How many FLOPS is a MacBook Pro?

As others mentioned, the performance of the new MBA vs. the new Pro is similar. I think the MBA really highlights what's great about the first iteration of the M1. So, if you're OK with the smaller screen, I would go with the MBA. If you feel yourself leaning towards the Pro, I would wait for the actual Pros coming later this year.

Jonovono, 8 months ago:

I do mobile dev and moved from a Pro to an M1 Air and am super happy with it so far. All my tools work great on the M1, and most now fully support it. I always liked the smaller screen, so even my Pro was the smaller-screen model. And the M1 is an entry-level part used in some of Apple's cheaper products.

OkGoDoIt, 8 months ago: We are setting our expectations so high. But the fact that we have anything to look forward to at all is awesome.

These are super exciting times in the computing space. Processors have been boring for way too long.

GeekyBear, 8 months ago: Quite a healthy bump.

These are fake, posted here a few days ago. One of the fairly accurate Apple leakers, Jon Prosser, has said that one of his known good sources has confirmed the scores are legit. I think we'll see more changes to other parts of the system, just based on the fact that they now control the entire silicon. Or impossibly good voice recognition that's always listening, even while the machine is 'asleep' in your bag or on your desk.

We'll keep seeing more and more little coprocessors, accelerators, etc. And because all of this is done in hardware, it will be impossibly fast and have almost no hit to battery life.

The world is their oyster now; they aren't held back by Intel's whims anymore. Unless the M2 is just a higher-core-count version? The M1 is already built on a 5nm process, and you're right about the upgrades being incremental and small.

How much will the average user notice the difference between 4 and 23 high-performance cores? This will have a huge impact for highly parallel tasks like rendering, but isn't it still the case that most software isn't designed to scale horizontally in a meaningful way? The most obvious gains would come from multicore loads, but those are getting more common among heavy computing tasks too. And I would expect any variation of the M-series tweaked towards performance rather than low power, as the M1 is, to also offer somewhat higher clock speeds.
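To put a rough number on the core-count question (a back-of-the-envelope sketch, not from the thread; the parallel fraction is an assumption):

```latex
% Amdahl's law: speedup on n cores when a fraction p of the work parallelizes
S(n) = \frac{1}{(1 - p) + p/n}
% Assuming p = 0.5 (half the work parallelizes):
%   S(4)  = 1 / (0.5 + 0.125)    = 1.6
%   S(32) = 1 / (0.5 + 0.015625) \approx 1.94
% Eight times the cores buys only about 21% more speed for such an app.
```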

Considering the M1 runs at 3 GHz and the most recent x86 parts run at up to 5 GHz, there should be quite some clock-speed headroom, if Apple chooses to use it.

Instead of 4 cores shared between 8 apps, you could have the same 8 apps each with a dedicated core or two, even. A bigger battery means more room to boost clocks higher. I feel like the M1 chips kind of broke the timeline. I think the M1X - or whatever they call the chip line with a much larger thermal envelope - will blow our socks off.

During those 10 years, mobile went huge and PC demand stagnated. You can look up corporate revenues and news coverage to verify that for yourself, if you don't believe me. It's easy to see how that market environment gave Apple a leg up.

Anecdotally, I bought one or two new laptops in the 10 years prior to the pandemic, but I probably bought eight or nine different phones. And while there was a little PC innovation, mainly in the earlier part of that decade, phone hardware got sooo much better. Snapdragon SoCs, waterproof flagship phones, awesome cameras, 4G, the whole nine yards. In my mind, the M1 is kind of an extension of that smartphone innovation, bridged from iPad and iPhone to the Mac.

Seems like they've achieved double-digit performance gains with each A-series chip over the last few generations. I need it for various aspects of the work I do. I think a Mac Pro level machine will be the interesting one: put a bunch of these CPUs in a system. I don't know if we can hope for expandability anymore; Apple just might not do that anymore.

Accujack, 8 months ago: There are other benefits too, like simplified motherboard design.

Now that Apple has taken the risk, other manufacturers will look at doing the same. Not all computers will use the SoC model, but for laptops and many desktops this will be a big win. Pretty much any HEDT x86 system has a more impressive memory system. Yes, it has been spreading like the plague. And I had to post something similar [1] not long ago. And many more before that. The performance improvement from a memory perspective comes from the shared memory address space and other similar optimisations.

Going off-package will at the very least increase power dissipation. Exiting a package, going across a PCB, and entering additional packages increases capacitance significantly, as well as resistance and inductance. This will impact performance. If the increased capacitance does not change the actual operating speed, then the buffers are supplying more current to overcome the capacitance, not to mention the potential ringing and other undesired effects from the additional parasitics.
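A rough sketch of why added capacitance costs power (the standard CMOS switching estimate; the capacitance and voltage values here are illustrative assumptions, not measured figures):

```latex
% Dynamic switching power per signal line: P = \alpha C V^2 f
% (\alpha = activity factor, C = load capacitance, V = voltage swing, f = toggle rate)
P = \alpha\, C\, V^{2} f
% Illustrative values: \alpha = 0.5, V = 1.1\,\text{V}, f = 4\,\text{GHz}.
%   On-package,   C \approx 1\,\text{pF/line}: P \approx 0.5 \cdot 10^{-12} \cdot 1.21 \cdot 4\times10^{9} \approx 2.4\,\text{mW}
%   Across a PCB, C \approx 5\,\text{pF/line}: P \approx 12\,\text{mW}, i.e. 5x the power per line
```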

There is a penalty for going off the SoC. It is not just a narrative; it is physics. The shared memory address space choice is of course important, but its performance and power envelope are impacted by the SoC vs. off-package choice. The combination of M1 performance and low power has happened due to a series of choices made by Apple. Forgoing user configurability and fixing memory choices at manufacture, while using SoC tech made mainstream by the phone industry, is one of those impactful choices.

There are of course several other important choices, but it is incorrect to dismiss this one as non-impactful. This is something M1 has in common with game consoles and smartphones, but not with traditional PCs, isn't it?

MikusR, 8 months ago: But said shared memory is segregated.

Toutouxc, 8 months ago: Uhm, as a games developer working on consoles... There are probably tricks to read from each other's RAM, but nothing integrated like Apple has done. It's memory on a SiP. Apple is running their stuff at higher effective speeds or something, so that looks faster. One with faster or equivalent memory. Apple is going to sell these things by the truckload.

The M1 is a great chip, but that has nothing to do with the location of the memory. LPDDR4X is a standard memory type. And it's definitely not in a package like Apple's. It's very common for high-end ultrabooks; just look at the XPS 13, which runs its memory at the same speed. It's just the most common high-end memory speed currently, nothing special about it.

Goz3rr, 8 months ago: Most i9 chips will happily run fast DDR4 as well. Also note that frequency is only part of the whole picture.

CAS latency is important as well, and it is much higher on Apple's memory. I'd say that having massive low-latency caches on die plays a larger role. The huge caches also appear to be extremely fast: the L1D comes in at a 3-cycle load-use latency. That cache is that big because the decode and ROB are so wide. Given that the decoders are already bigger than the integer units, I suspect that will be a hard thing to do. I get crashes with XMP, usually right at the worst possible time, like on a Zoom call where I am presenting!
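For reference, converting CAS latency from cycles to nanoseconds (my arithmetic; the LPDDR4X CL value below is an assumption for illustration):

```latex
% First-word latency: t = CL / f_{clk}, with CL counted in memory-clock cycles
t = \frac{CL}{f_{clk}}
% DDR4-3200, CL16:              f_{clk} = 1600\,\text{MHz}, t = 16 / 1.6\,\text{GHz} = 10\,\text{ns}
% LPDDR4X-4266, CL36 (assumed): f_{clk} = 2133\,\text{MHz}, t = 36 / 2.133\,\text{GHz} \approx 16.9\,\text{ns}
% A higher clock does not automatically mean lower latency.
```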

And saying users should overclock just seems weird; Apple works out of the box. For whatever reason, the overall memory system on the M1 machines just seems better than Intel's. I imagine they can do at least that. This may have changed since then. Performance per watt.

Reducing the trace length makes it possible to get high-frequency RAM working with acceptable power consumption. It has been said again and again, but this is a very common misconception: the memory is a soldered-on extra chip. It's not on the SoC. That's some weird semantics. It is very much soldered separately; they are separate components soldered next to each other [1].

Ah, my bad - should have looked at the board. Why not both? Kernel in charge of "swapping". Maintains expandability while keeping most of the performance benefit.

Accujack, 8 months ago: You're basically describing RAM caching. Putting all the RAM physically close to the processor gives a giant performance gain that's mostly lost if any of the system's RAM is "remote" on the motherboard. Are there any concrete numbers to back the claim that putting all the RAM physically close to the processor gives a giant performance gain?

It doesn't, as has been corrected time and time again. The M1 has pretty high memory latency, at around 100 ns [1], which is significantly higher than either AMD or Intel for typical systems. Note that physical distance between CPU and memory matters rather little for latency, as DRAM is high-latency in itself, so adding a few ns at most due to wiring is not going to matter.
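A quick sanity check on the wiring claim (my estimate, assuming a typical FR-4 propagation speed and an illustrative trace length):

```latex
% Signal speed on FR-4 is roughly half of c: v \approx 1.5\times10^{8}\,\text{m/s} = 15\,\text{cm/ns}
t_{wire} = d / v
% d = 7.5\,\text{cm} (a generous CPU-to-DIMM trace): t_{wire} = 0.5\,\text{ns} each way
% Round trip \approx 1\,\text{ns}, versus \sim100\,\text{ns} total DRAM load latency: about 1%.
```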

NathanielK, 8 months ago: Not the most scientific, but UserBenchmark is useful because it has latency graphs available for millions of systems.

Thank you. If M1 discussion continues to be like this, we have a chance of stamping out M1 misinformation on HN. But sometimes we are just too lazy to provide the context or to spell everything out.

This information is readily available with a simple Google search, and yet across the past dozens of M1 threads this "memory advantage" thing keeps popping up.

NathanielK, 8 months ago: What performance benefit? It's subjective, like in high-end audio.

It's a beast and it's also mega efficient. My other MacBook would be at 70C with the fan screaming. My daily driver is a rMBP -- the last 15" model, with the upgraded keyboard. However, it would be inconvenient to give up ports, and right NOW I still do some Windows virtualization, so I'm holding off.

It may be worth noting that battery life is much better on my M1 MBA than on Intel devices, so you don't necessarily need to sacrifice one of the two ports for charging during the day. It depends on your use profile, but for me it leaves both ports available for peripherals, not just the one you might expect.

As is, I'm feeding power and my Thunderbolt display on one side, and use the side facing me to occasionally juice up my keyboard or iPad. I'm not really interested in juggling plugs during the day. If the current rumors about the upcoming MBPs are true, you really might want to wait for them. That's where I am. Same here - my beast hackintosh has now turned back into a barely used gaming PC, because the Air is faster and more convenient.

There are new i5s done on 10nm SuperFin.

Not that long ago, you needed over a thousand square feet of supercomputing hardware to reach the same mark. For a little less distant comparison, you could buy a Sun E10k. Including the space around it, it would be a typical 1BR apartment full of compute. Also, I assume the M1 does significantly better than these GFLOPS numbers if you don't run it through the browser.

> I assume the M1 does significantly better than these GFLOPS numbers if you don't run it through the browser.

What's being measured is loading something onto the GPU and running it there. It doesn't make much difference how it gets to the GPU.

OkGoDoIt, 8 months ago:

Even more exciting would be a power use comparison, including for the air conditioning and other auxiliary support needs.

Exciting times we live in. A single E10k fully populated needs about 11 kW, not including the needed air conditioning. So roughly 200 kW for 18 of them. The MacBook Air M1 would be something lower than 30W, since that's what the power supply is rated for. The AC would need about 10 kW per machine, or roughly 180 kW for all of them. All in all, the computers in the example thus come out at north of 350 kW of total power use. That's a power reduction by a factor of more than 13,000 in less than two decades! The equivalent of driving 80,000 km with a very modern gasoline car.
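Spelling out that ratio (using the wattages above, which are themselves rough reconstructions, so treat the exact factor loosely):

```latex
% 18 machines at roughly 11 kW each, plus about 10 kW of AC per machine:
P_{old} \approx 18 \times (11 + 10)\,\text{kW} \approx 380\,\text{kW}
% One M1 MacBook Air under load, a bit below its 30 W adapter rating:
P_{new} \approx 28\,\text{W}
% Reduction factor: P_{old} / P_{new} \approx 380{,}000 / 28 \approx 13{,}500
```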

The mind boggles!

Someone, 8 months ago: At 30W you would run out of battery in less than 2 hours, but then again, at full speed you'd probably hit a heat limit earlier. Are we comparing single precision vs. double precision? If the current trend continues, we will have a chip the size of a Planck unit with the computing capacity of a Jupiter brain.

Chat applications will be even slower than today. Bad timing: the industry is about to move towards direct imports and much simpler dependency management, thanks to the evolution of JS and likewise the tooling.

If anything, node_modules will slowly fade away in the coming years. And not just with that Node rewrite that links directly. Thank you for that info; I will consider it in the future. Any more info?

DukeBaset, 8 months ago:

If only I could give you gold. I opened Discord after 2 years of not using it, remembering a simple Slack-like chat that was slightly better. I opened it again recently with all my old channels, and it looks like they've added a hundred new features that I have no idea what they do or would never need.

Discord is still a good product, but it makes you miss the simplicity.

Wohlf, 8 months ago: The GT line is just about the furthest thing from high end.

Yup, thanks.

Dylan, 8 months ago: Right, the low-end models are excessively underpowered and only exist for niche cases or for tricking people.

I like this metric: square feet of supercomputer-year. It'd be fun to see comparisons of, say, phones, holding supercomputer-years constant. Now, imagine going back in time and trying to explain to people in the supercomputer lab that 25 years later there would be legions of people using portable supercomputers, because you need this to build JavaScript applications for websites.

Ok, my Pixel 4 phone does that many gigaflops??? The Snapdragon does its GFLOPS FP32 natively, not in the browser. The M1 does 2.6 TFLOPS. What difference does it make? WebGPU runs natively. How do you explain the difference between the 2.6 TFLOPS figure and the browser numbers? Thanks for pointing this out.
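For context on where a figure like 2.6 TFLOPS comes from (standard peak-FLOPS accounting; the ALU count and clock are the commonly reported M1 numbers, treated here as assumptions):

```latex
% Peak FP32 throughput: ALU count \times 2 ops per FMA \times clock
\text{FLOPS}_{peak} = N_{ALU} \times 2 \times f
% M1 GPU: 8 cores \times 128 ALUs = 1024 ALUs at roughly 1.28 GHz
1024 \times 2 \times 1.28\times10^{9} \approx 2.6\,\text{TFLOPS}
```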

This does put things in perspective. While it's true that achieving that in the browser is something, this sounds much less interesting and a lot more like the usual hype.

LockAndLol, 8 months ago:

So much for an "amazing M1". So it's as performant as a chip that was in a phone released a year prior. A passively cooled M1 beats a desktop Ryzen, an i7, and an i9 in most benchmarks [0].

But if you want to pick this single benchmark so that you can conclude the M1 is no better than a year-old mobile phone SoC, then you do you. Yet he has the Ryzen configured with a much lower memory clock than the M1's. The single-core performance of these chips is very impressive, especially at such a low TDP, but in raw multi-core CPU power it simply does not beat a high-core-count desktop Ryzen. But if you want to pick one set of benchmark tests so that you can conclude a two-year-old Ryzen is not better than an M1, then you do you.

The Ryzen wasn't the main point; I even own one myself. In all fairness, I shouldn't have used that phrasing. The M1 hits roughly three times what the Snapdragon does outside of the browser, so it's about three times as performant in raw score.

Maybe I should just try x86 Node for now?

Performance is not everything if you were hoping to get some actual work done! I use the Docker preview on my M1 every day for work and it's great. I'm guessing you're running into issues with C dependencies. You can also go full Rosetta: create an alias to your terminal, right-click and enable "Open using Rosetta", open it, and then everything you run from this terminal will also be Rosetta (amd64); you can run Rosetta Homebrew, Node, etc. You can confirm what is running under Rosetta in Activity Monitor.
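As an aside, if you'd rather check from a script than from Activity Monitor, macOS exposes a sysctl key for Rosetta translation; here is a minimal Node/TypeScript sketch (the helper name is mine, the `sysctl.proc_translated` key is the documented flag):

```typescript
import { execFileSync } from "node:child_process";

// On Apple Silicon macOS, sysctl.proc_translated is 1 for processes
// running under Rosetta 2 translation and 0 for native arm64 processes.
function isRosetta(): boolean {
  try {
    const out = execFileSync("sysctl", ["-in", "sysctl.proc_translated"], {
      encoding: "utf8",
    }).trim();
    return out === "1";
  } catch {
    return false; // key absent: Intel Mac or non-macOS
  }
}

console.log(`arch: ${process.arch}, rosetta: ${isRosetta()}`);
```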

Great tip! The Docker tech preview seems to work pretty well, so I'm not sure what you mean by "especially Docker". I've been using Docker on the M1 and it's been fine. Also, cross-compiling works great (buildx). An interesting downside to the M1 Macs for sure, at least for a while; thanks for bringing it up. Actually, I've found the opposite: Xcode stuff I found janky, poorly thought out, requiring arcane compiler flags and giving a bunch of cryptic error messages.

Homebrew works great on M1. Now that Homebrew is up and running on Apple Silicon, I haven't found much that doesn't work. MacPorts seems to mostly work too, though at this point brew seems more up to date. If you want to get an M1 Mac for development, I haven't found many things that don't work, and most work very well. The compile speed on even an M1 MacBook Air is incredible. Have you tried running under Rosetta? I've been using the Elm compiler under Rosetta for over 2 weeks, and it's still at least 2x faster than on the latest Intel Mac.

I'm also using the Atom editor with Rosetta, which works okay-ish; VS Code has an M1 build which works well. Yes, and it still didn't work. You probably won't get this performance when WebGPU is finalized: it has to add bounds and sanity checks, and it's unclear how much worse the perf will be. Memory operations in modern GPUs basically evolved from fetching textures, which intrinsically have bounds checking built in; they have a width and a height.

All modern desktop GPUs, and probably mobile GPUs these days, use "descriptors" for textures and buffers, which specify both address and size. Out-of-range fetches from a buffer return 0, and out-of-range writes are no-ops. Desktop GPUs reached this kind of performance nearly 10 years ago, but I guess it's not bad for a first-generation new design.

I guess it's not bad for a fanless, few-watt SoC the size of a stamp, on an entry-level portable computer, in a browser. To get that kind of performance nearly 10 years ago in a desktop GPU, I bet you would need a whole lot of dollars, watts, and cubic inches. It is impressive unless you compare apples to oranges. Plus, on bare metal it reaches 2.6 TFLOPS. The entry-level M1 is quite a bit cheaper than that in euros.

Note that the price in euros almost matches the dollar price. That being said, I agree with all you said. I was looking at the Mac mini M1 on the German Apple store. Nearly 10 years ago, a desktop GPU could already hit 2.6 TFLOPS. Kinda proving my point then? Now add the rest of the components, and the price, wattage, and size shoot up exactly as described. See, that's what I also thought, but I've heard that the built-in bounds checking can be pretty buggy. I'm not an expert though. Yeah, compared to modern desktop GPUs, which can hit many times this, it's not that impressive.

That being said, they're also consuming many times the power. Safari makes it substantially easier to enable WebGPU than Chrome does (which requires a canary version and flags), which leads me to believe there are already some security mechanisms in place.

But, time will tell! I tried it in Chrome Canary with the relevant flag, but unfortunately it didn't seem to work with this particular site, failing with "TypeError: Failed to execute 'createBindGroupLayout' on 'GPUDevice': required member entries is undefined." Shame, as I wanted to see what happened if I pitted my desktop against it. Of course, it's likely the WebGPU implementations between browsers are not equivalent from a performance point of view.
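For what it's worth, that error is typical of a page written against an older draft of the spec: in the current API, createBindGroupLayout requires an `entries` array. A minimal sketch of the modern shape (the binding index and buffer type here are illustrative):

```typescript
// Acquire a GPU device (requires a WebGPU-enabled browser;
// types come from @webgpu/types).
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error("WebGPU not available");
const device = await adapter.requestDevice();

// 'entries' is a required member of the descriptor; older spec drafts
// used a different shape, which current browsers reject as above.
const layout = device.createBindGroupLayout({
  entries: [
    {
      binding: 0,                         // matches @binding(0) in the WGSL shader
      visibility: GPUShaderStage.COMPUTE, // visible to the compute stage
      buffer: { type: "storage" },        // a read/write storage buffer
    },
  ],
});
```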

Unfortunately the browsers haven't quite settled on a standard, so the code posted only works in Safari.

Each core of a Xeon has two floating-point units, each of which can complete two floating-point operations per "tick" of the system clock.

It does that with a neat trick that combines a multiply and an add operation in one clock cycle. In practice, it is possible to achieve an efficiency of about 89 percent with Linpack on a Mac Pro, according to Dr. Srinidhi Varadarajan at Virginia Tech. So the net computational power of the Mac Pro is 91 gigaflops using that benchmark. That is a ratio of about 12,000 to 1 in speed for Roadrunner at Los Alamos.
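Working that figure backwards (my arithmetic; the eight-core, 3.2 GHz configuration is assumed from the top Mac Pro of that era):

```latex
% Peak: cores \times flops/cycle \times clock (2 FP units, fused multiply-add)
8 \times 4 \times 3.2\,\text{GHz} = 102.4\,\text{GFLOPS}
% At about 89% Linpack efficiency: 102.4 \times 0.89 \approx 91\,\text{GFLOPS}
% Against Roadrunner's \sim1.1 PFLOPS Linpack: 1.1\times10^{6} / 91 \approx 12{,}000
```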

Here is a thought: a Mac Pro is about a thousand times faster.


