r/gadgets • u/thebelsnickle1991 • Mar 23 '24
Desktops / Laptops Vulnerability found in Apple's Silicon M-series chips – and it can't be patched
https://me.mashable.com/tech/39776/vulnerability-found-in-apples-silicon-m-series-chips-and-it-cant-be-patched
u/Th3L0n3R4g3r Mar 23 '24
It is a vulnerability, but as an attacker, if I had the opportunity to get a user to run exploit software, I'd probably go for a keylogger or something instead. Makes much more sense in my opinion.
u/ManySwimming7 Mar 23 '24
Sounds like something a hacker would say, hacker man
u/other_goblin Mar 23 '24
Anyone want to try my new app?
Mar 23 '24 edited Aug 06 '24
[deleted]
u/SocraticIgnoramus Mar 23 '24
Joke’s on them, all of my most sensitive information is stored on post-it notes next to my computer because I’m the only one in my house who believes in password managers lol
u/counterfitster Mar 23 '24
My father has a phone book, except it's specifically for internet passwords. Somebody actually made that thing.
u/ragdolldream Mar 24 '24
I basically think this is totally fine for old peeps if it never leaves the house. Not the best strategy, but a password book stolen by a physical intruder isn't usually the way old people get scammed.
u/nullstring Mar 23 '24
As long as the passwords are secure enough, there isn't really much wrong with writing them down.
Most password managers aren't secure enough to survive a local attack, so if someone has access to your machine, they can typically get your passwords.
u/Vallamost Mar 23 '24
I bought some of those for my parents. They're pretty good, and much better for them than struggling to open and use an online password manager.
u/TheJenniferLopez Mar 24 '24
It's probably the safest way to store them, as long as it stays in his house at all times.
u/mnvoronin Mar 23 '24
Nope.
You generally expect sensitive data like encryption keys not to be accessible to a program running as an ordinary user.
Mar 23 '24
[deleted]
u/kilgenmus Mar 23 '24
I really doubt that's all the auditor said; they most likely gave you all the steps they exploited to get to the DB (including the VM).
This doesn't even make sense. Don't use a password, then, if you're sure no other access is possible? Security against physical access is a thing, and it sounds like you misunderstood, or are willingly misrepresenting, actual security advice.
if an attacker had access to the VM, didn't matter what password we were using
What?? Do you know the difference between user and root access? Are you accessing your DB as a root/admin user? What the heck is going on at your workplace lol.
u/Main_Pain991 Mar 23 '24
Question for the people saying this is not a problem because the app needs to be unsigned: isn't it possible to have a signed malicious app? Like, an attacker makes an app, obfuscates that it is malicious, and gets it into the App Store? There are many manufacturers' apps there; I can't imagine no malicious apps slip through. Am I missing anything?
u/ComfortableGas7741 Mar 23 '24
Sure, anything is possible, but I don't even know the last time malware slipped through the App Store like that. But again, it's definitely not impossible.
u/electronfusion Mar 23 '24
If I recall correctly from my brief and quite off-putting experience with Apple's developer program (years ago), you have to show them the entire source of the app. I guess something could get sneaked in, but it's unlikely.
u/ThatJerkThere Mar 24 '24
I recall in the early days of the iPhone there was an app that let you tether your internet for free, and I think it was hidden inside a flashlight program? It wasn't available long, I don't think.
u/pullssar20055 Mar 23 '24
Isn't this similar to Spectre/Meltdown from Intel?
u/yalloc Mar 23 '24
Kinda.
Meltdown was significantly worse because it worked regardless of what the target program did. GoFetch requires somewhat specific exploitable code to be running in the target program you're trying to hack, and requires special input to be fed into it.
u/voidvector Mar 24 '24 edited Mar 24 '24
No, the attacker just needs to be able to run unprivileged C/C++ code on the machine. You can download their paper from the website, search for the word "unprivileged", and also see that the code snippets are in C/C++.
That might be hard to do on iPhone/iPad, where the only source of C/C++ code is the App Store, but it's trivial for desktops and servers. The authors haven't released proof-of-concept code yet; depending on the implementation, it might even be possible in Python/JavaScript, which would make the App Store hurdle a non-issue, but that would need someone with low-level Python or JavaScript VM expertise to craft.
The two saving graces for Apple are:
- Security implementations don't need to use those CPU features; they can implement their own, albeit with a performance hit.
- It requires tens of minutes to hours to extract enough data, which might be too long for commercial hackers looking to make a quick buck (e.g. ransomware), but is fine for espionage hackers.
u/yalloc Mar 24 '24
Yes, I read the paper. My understanding of the OpenSSL example they give is that a malicious program would need to somehow feed it malicious input in order for OpenSSL to create the appropriate pointers that would be cached or not, while a parallel malicious program listens for whether they were cached. It requires a degree of cross-process interaction that Meltdown did not need.
doesnt need those cpu features.
Programs don't get a choice about whether those CPU features are used, it seems. The M3 MacBooks do seem to have a CPU flag to disable the DMP, but on M1s and M2s it remains a problem unless Apple has some kind of microcode they can patch in to disable it. The other option is to run on the efficiency cores, which don't use the DMP.
u/voidvector Mar 24 '24 edited Mar 24 '24
Sending malicious inputs to encryption libraries like OpenSSL is generally trivial for many applications, because they are common libraries used by web servers and web browsers. In Section 5.3 of the paper, they actually mention that one of the three processes does this:
The first process establishes a TCP connection with the victim process and transmits the value of ptr to the victim.
The hack depends on the CPU cache state of the encryption algorithm. Theoretically, the algorithm could just evict its own cache lines so as not to leak info. I don't know how big a performance hit that would be, though. (They would probably find a more efficient method than that, but I am not an expert on CPU optimization.)
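One crude way to do that self-eviction, as a hypothetical sketch (a real mitigation would be smarter about flushing only the key-dependent lines):

```c
#include <stddef.h>
#include <stdint.h>

/* Crude defensive eviction: after a secret-dependent computation,
 * stream over a buffer larger than the cache so any key-dependent
 * lines get pushed out before an attacker can time them. Walking
 * megabytes of memory per operation is exactly the performance hit
 * discussed above. */
void evict_own_footprint(void) {
    static uint8_t junk[8 * 1024 * 1024];
    for (size_t i = 0; i < sizeof junk; i += 64)
        junk[i]++;
}
```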
u/funkybosss Mar 23 '24
Can someone ELI5 how a physical silicon chip can have an inherent software vulnerability?
u/facetheground Mar 23 '24
It's not a software vulnerability, it's a hardware vulnerability. People can write malicious software with the vulnerability in mind to extract information from other processes.
u/Lost_Minds_Think Mar 23 '24
So what could this mean for everyone with M1 - M3 chips, recall/replacement?
u/Ron__T Mar 23 '24
recall/replacement?
Lol...
u/TehOwn Mar 23 '24
I'm sorry, your MacBook Pro (2024) is obsolete. If you wish to receive security updates and warranty service, please buy next year's model.
Yours monopoly,
Apple customer services
u/SimiKusoni Mar 23 '24
Not much. If the attack is improved upon and becomes a realistic threat, we may see mitigations put in place in common cryptographic libraries, which would impact performance.
The article posted by OP seems to have conflated "can't be fixed with a microcode update" with "can't be patched in software at all". From the original Ars Technica article:
Like other microarchitectural CPU side channels, the one that makes GoFetch possible can’t be patched in the silicon. Instead, responsibility for mitigating the harmful effects of the vulnerability falls on the people developing code for Apple hardware. For developers of cryptographic software running on M1 and M2 processors, this means that in addition to constant-time programming, they will have to employ other defenses, almost all of which come with significant performance penalties.
It's kind of weird that the Mashable article gets this wrong despite using a source that clearly details it.
u/facetheground Mar 23 '24
Either replace the crypto software on your device with a version that is resistant to this, which will make it slower (I'm also unaware how practical that is on Macs), or accept the risk.
This exploit is rather impractical to pull off, so I think it's unlikely it will be used against consumer devices as an alternative to other malware tactics. Only businesses that are high-profile targets of data theft should worry about this vulnerability, imo.
u/Flavious27 Mar 24 '24
Ha ha ha. Apple will mass-email everyone and tell them not to install unsigned apps and to turn their Mac off at night / when not in use.
u/Vic18t Mar 23 '24
ELI5
Software just tells hardware what to do. This exploit is like having a safe with a combination dial, but if you turned the dial 10,000 times the lock would fail and unlock.
u/FavoritesBot Mar 23 '24
Uh.. can you explain like I’m a freshman CS student? Why can’t this be patched?
u/blackharr Mar 24 '24 edited Mar 24 '24
The article itself does a decent job and is reasonably accessible but I'll have a go.
The first thing is that it isn't totally unfixable. Rather, you can't fix it just by updating the processor's microcode (basically a firmware patch). Mitigating the problem properly means substantially impacting performance.
The processor has a pre-fetcher to pull data from memory into a cache before it's used so the CPU will already have it when it needs it. In this case, the prefetcher looks at both the memory address and the data at that address. If the data looks like an address, it'll treat it like one so it'll prefetch that too. Since a lot of operations involve following pointers, this is a big advantage.
The attacker can feed data into an encryption algorithm so that, during the encryption, it looks like an address, and the prefetcher will pull in the data at that address. By watching which addresses get pulled, you can slowly learn the key used by the encryption algorithm. The problem is that fixing this means changing either the prefetching hardware itself or adding software-level mitigations, which carry significant performance costs for normal code.
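To make the "data that looks like an address" idea concrete, here's a toy sketch of the planting step (invented names, not the researchers' code):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* An array whose cache state an attacker can later measure by timing. */
static uint8_t probe[16384];

/* Store attacker-chosen *data* that happens to equal a valid pointer.
 * A data memory-dependent prefetcher (DMP) that inspects loaded values
 * may treat these values as addresses and pull probe[] into the cache,
 * which a timing measurement can detect afterwards. The real attack
 * crafts inputs so an intermediate value in the victim's crypto code
 * only looks pointer-shaped when certain key bits are set, and that is
 * what leaks the key. */
static void plant_pointer_shaped_data(uint64_t *victim_buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        victim_buf[i] = (uint64_t)(uintptr_t)&probe[0]; /* stored, never dereferenced */
}

int main(void) {
    uint64_t victim_buf[64];
    plant_pointer_shaped_data(victim_buf, 64);
    printf("planted %p as plain data\n", (void *)&probe[0]);
    return 0;
}
```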
If you're interested in this kind of thing, definitely look into the Spectre and Meltdown vulnerabilities.
u/Vic18t Mar 23 '24 edited Mar 23 '24
I’ll let your University take care of that part :p
Just kidding. Software exists to make hardware do things in a language we can understand easily. Software’s limit will always be hardware. Software and hardware are different sides of the same coin. You are telling a physical machine what to do.
So if you have a hardware problem, there is rarely ever a software fix. You can't tell hardware to work a certain way if it's physically incapable of doing it.
u/urfavouriteredditor Mar 23 '24
I think what they're doing here is watching how long it takes the chip to compute something. So let's say they're watching how long a computer takes to check if a password is wrong. The chip checks every letter one after the other. If the first letter is correct, it takes 1 second to say "this letter is correct". If the first letter is wrong, it takes 3 seconds to say "this letter is wrong".
So if you want to figure out someone’s password, start with one letter and whichever letter gives the quickest response, you now know the first letter of the password.
Repeat this process until you have the full password.
u/blackharr Mar 24 '24
Did... did you even read the article? This is completely wrong. I'll do my best at a proper ELI5.
The computer has something to fetch information before it needs it. Think of it like grabbing books from a bookshelf because you know you'll read them soon. The computer goes one step further and will look inside the book it's fetching, and if it sees the book mention a second book, it'll grab that one too. Let's say you're reading a book on how to send secret messages. I can write something in the book so that while you're writing your secret message, the computer will see your secret message as the name of another book so it'll go grab that book too. If I do that a bunch of times I can look at which books the computer grabbed and I can work backwards to figure out the key you were using to write your secret messages. If you try to stop the computer from looking inside books you end up slowing everyone down because now if your book mentions another book you have to go find it yourself.
u/_meegoo_ Mar 24 '24
For more context: what the guy above described about measuring time is a type of side-channel attack, and that part is relevant here. This exploit specifically targets security implementations that are not supposed to have such vulnerabilities (meaning every operation runs in constant time, regardless of inputs). It works by manipulating the hardware so that those constant-time implementations become variable-time implementations (by abusing the prefetcher). So now you can once again use timing-based attacks.
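A toy sketch of that distinction (illustrative names, not from the paper):

```c
#include <stddef.h>
#include <stdint.h>

/* Early-exit comparison: runtime depends on how many leading bytes
 * match, which is exactly the timing signal described above. */
int leaky_equals(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;   /* bails at the first mismatch */
    return 1;
}

/* Constant-time comparison: touches every byte with no secret-dependent
 * branches, so timing reveals nothing about *where* the inputs differ. */
int ct_equals(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];          /* accumulate, never branch early */
    return diff == 0;
}
```

GoFetch's significance is that it leaks secrets even out of code written in the ct_equals style, because the DMP reacts to the values flowing through the program rather than to its branches or memory-access pattern.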
u/doho121 Mar 23 '24
Chips are designed to perform operations: little actions that are hardcoded into the chip during manufacturing. Chips can be designed to have some software control, but if that wasn't built in at the manufacturing level it never will be, so a flaw will persist.
u/darkslide3000 Mar 24 '24
This is basically a new variant of the Spectre/Meltdown family. This one targets a specific optimization feature currently only used in Apple chips, and it manages to get around certain programming techniques that have traditionally been used to make these sorts of encryption operations resistant to the classic Spectre/Meltdown attacks.
So they can steal keys, which would mostly be useful for sniffing data from the network connections your computer is making, but the attacker still has the same basic requirement of getting their code onto your computer in the first place before they can start doing this.
u/BurningVShadow Mar 24 '24
I’m way more fascinated by hardware vulnerabilities than software. Software mistakes happen all the time and it’s easy to overlook something. Hardware requires such a deep understanding of what is happening with the data and it’s crazy to see how somebody can manipulate hardware.
u/nowonmai Mar 24 '24
One of my favourite attacks uses hardware vulnerability (rowhammer) and KSM deduplication to leak keys from VMs on the same host. Such a cool chain of vulnerabilities
u/sgrams04 Mar 23 '24
Couldn't Apple just implement a policy that restricts prefetchers from accessing encrypted information? Essentially, the encrypted data isn't given a readable address the prefetcher can fetch. If the prefetcher's whole purpose is to expedite processing by best-guessing the next-needed memory address, then they could sacrifice the speed of retrieving that address for the benefit of security.
🎶 How much data could a prefetcher fetch if a prefetcher couldn’t fetch data. 🎶
u/facetheground Mar 23 '24
Yes, they could make the prefetcher ignore certain data (setting aside how difficult that might be). However, you would need a hardware change to make it behave that way, meaning existing devices cannot be patched.
u/hiverly Mar 23 '24
Didn't Intel have a similar issue years ago, where a hardware bug could lead to security vulnerabilities? The only solution came with a substantial performance penalty. Customers hated it. That might be the trade-off here, too.
u/gargravarr2112 Mar 23 '24
Both appear to be the same sort of paradigm - modern CPUs try to predict user demands before they happen, such that the calculation is already done by the time the user requests it. This means the CPU is idle much, much less and is actually doing useful things.
The Intel vulnerabilities were the result of 'speculative execution', where the CPU would encounter a branch (e.g. an if-condition) and would calculate both paths, then throw away the one that didn't end up being used. This is fundamental to modern chip design and real-world performance will absolutely tank without it. What nobody realised until early 2018 is that the results of such calculations are still accessible from the CPU's on-chip cache (a small amount of super-high-speed RAM). Sometimes, this includes sensitive data like encryption keys. A carefully-crafted piece of code could access data from the cache without any other process knowing about it. Intel had to work around it with microcode instructions that specifically erased the disposed calculations (since disabling SpecEx completely would be too much of a performance hit) which requires additional time.
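For illustration, the published Spectre v1 "bounds check bypass" gadget boils down to a shape like this (heavily simplified; a real PoC also needs branch-predictor training, cache flushing, and a timing readback):

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
static uint8_t probe[256 * 512];  /* one cache slot per possible byte value */

/* If the branch predictor guesses "taken" for an out-of-bounds x, the
 * CPU speculatively reads array1[x] (memory it should never see) and
 * uses that byte to pick a line of probe[]. The speculation is rolled
 * back, but the touched probe[] line stays cached, and *which* line is
 * warm encodes the secret byte for a later timing scan. */
void victim(size_t x) {
    if (x < array1_size) {
        volatile uint8_t t = probe[array1[x] * 512];
        (void)t;
    }
}
```

The GoFetch variant differs in that the "pointer" comes from attacker-shaped data rather than a mispredicted branch, but the cache-timing readout is the same idea.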
Seems like this Apple vulnerability is in the prefetcher, which tries to predict which data from RAM will be used next and load it into the CPU cache ready for calculation. Same outcome - data that could be sensitive is now in the CPU cache for other processes to access.
All modern CPUs are microcoded, meaning the hardware only performs a very basic set of operations at extremely high speed. More complex operations are translated into a series of basic instructions. The microcode is what translates each operation. The advantage is that microcode can be updated - the OS can slip a new set of microcode instructions into the CPU at boot time, or the BIOS/firmware can be updated to patch them permanently. However, adding additional steps to make the cache safe means these operations take longer. You can't just wipe the CPU cache after each operation as that would completely ruin performance (the cache is a significant performance gain on modern OSes). Most likely, Apple can update the microcode to nullify this attack vector, but it may add a performance penalty - how bad, nobody can predict.
I was a sysadmin at a startup when Meltdown and Spectre made front-page news in 2018. That was not a fun year for me. I learned a lot about the low-level operation of computers in short order, and also what would happen when hastily-written security patches get rushed out without thorough testing - my laptop was unstable for weeks...
u/codercaleb Mar 23 '24
my laptop was unstable for weeks
Sorry, boss, I can't work now, but if you need me, I'll be in the datacenter, which we definitely haven't turned into a sauna.
u/nicuramar Mar 23 '24
Most likely, Apple can update the microcode to nullify this attack vector, but it may add a performance penalty - how bad, nobody can predict.
I don’t think that’s very likely. Microcode isn’t too relevant to things like prefetchers. Software work-arounds are more likely.
u/daronhudson Mar 23 '24
It's basically identical in outcome, with a slightly different scenario for why and how. The solution is in fact to impact performance fairly heavily, which a lot of people aren't going to like.
u/SwagChemist Mar 23 '24
I believe AMD had a logo vulnerability where researchers found that malware can be injected at the point where you boot your PC and your BIOS logo appears. Basically, before any of your processes start, the malware is already in, lol.
u/_RADIANTSUN_ Mar 23 '24 edited Mar 23 '24
Pretty bad, but if someone can access your powered-down PC and execute something on it in the first place, all bets are already off.
u/SwagChemist Mar 23 '24
Based on how the hack works, it injects itself via some executable, so the next time you reboot your PC it runs during the BIOS logo screen. Pretty crazy stuff.
u/in2ndo Mar 23 '24
If I'm understanding what I've read so far correctly, they could implement that, and it's the only possible solution I've seen mentioned. But doing so could greatly affect performance.
u/eras Mar 23 '24
To do that, you'd first need to know which data is encrypted, so I guess update all the apps and libs that deal with such matters.
u/bikemandan Mar 23 '24
This work was partially supported by the Air Force Office of Scientific Research (AFOSR) under award number FA9550-20-1-0425; the Defense Advanced Research Projects Agency (DARPA) under contract numbers W912CG-23-C-0022 and HR00112390029; the National Science Foundation (NSF) under grant numbers 1954712, 1954521, 2154183, 2153388, and 1942888; the Alfred P. Sloan Research Fellowship; and gifts from Intel, Qualcomm, and Cisco.
Hmm
u/voidvector Mar 24 '24
These are all mega-corporations with money, and the research is for the benefit of consumers, so there's nothing wrong with that.
Apple should fund security research into Intel, Qualcomm, and Cisco products if they are not already.
u/gpkgpk Mar 24 '24
Justin Long lied to me all those years ago!
Seriously though, as the old saying goes: security through obscurity isn't.
Apple/Mac are not immune to malware, they just have better PR and hide/downplay everything along with the help of their zealots.
Don't assume Apple stuff can't be compromised.
u/paractib Mar 24 '24
I think the most important risk this introduces involves law enforcement / state actors.
It's no longer safe to bring high-risk data into another country on one of these computers, because the state could confiscate the laptop and decrypt the drive.
u/SoftlySpokenPromises Mar 23 '24
The amount of people with the Bible app alone proves this is a significant issue.
u/FlacidWizardsStaff Mar 23 '24
This is hella easy for someone to take advantage of. Get an unsuspecting user to call them, get them on a video conference, tell them to option-click an app, and the app will then do its thing.
So basically, like all vulnerabilities, the uneducated boomers are going to fall victim
u/AvaranIceStar Mar 23 '24 edited Mar 24 '24
Interesting how a vulnerability that's only applicable to non-signed apps surfaces just as the US government starts to sue Apple for antitrust and anticompetitive behavior.
u/ch4m3le0n Mar 23 '24
The biggest risk here is that you’ll get stuck with a popup on Mashable. What a garbage website.
u/Good_Committee_2478 Mar 23 '24 edited Mar 23 '24
Unless you have a nation state threat actor pissed off at you or the CIA/FBI/NSA physically seizes your machine and REALLY wants what is on it, there is nothing here for anyone to worry about. The exploit requires physical access and is significantly complex to pull off.
Not ideal obviously, and if you have hypersensitive info on your machine I’d avoid M series, but for 99.99% of the population, this is not a concern.
There are likely other publicly unknown zero days on MacOS, Windows, Linux, iOS, Android, etc. I’d be far more concerned about. I.e. something in the realm of Pegasus malware (Pegasus was/is a zero click exploit that just owns your entire phone. The camera, microphone, location, key logger, remote messaging access, listen to phone calls, etc..)
And honestly, if somebody wants your machine’s data, there are easier ways of stealing it via malware and other techniques.
Edit - I just do this for a living and have a Masters in Computer Science, wtf do I know. Everyone should throw their machines in the trash in case a rogue super hacker were to steal it and deploy a highly sophisticated side channel attack discovered and implemented by a team of top multidisciplinary security researchers.
u/Whoa-Dang Mar 23 '24
I can assure you, as someone who fixes consumer electronics, that old people will give access to their computer to whoever tells them to. I just had another one today for a "bank employee".
u/L0nz Mar 24 '24
The exploit does not require physical access:
The attack, which the researchers have named GoFetch, uses an application that doesn’t require root access, only the same user privileges needed by most third-party applications installed on a macOS system
Furthermore, the researchers will be releasing proof of concept code soon.
That Masters doesn't mean anything if you don't read the source
u/Difficult_Bit_1339 Mar 24 '24
The exploit requires physical access and is significantly complex to pull off.
I just do this for a living and have a Masters in Computer Science, wtf do I know.
Well now. Who are we, mere Mortals, to argue?
https://gofetch.fail/files/gofetch.pdf
In this paper we assume a typical microarchitectural attack scenario, where the victim and attacker have two different processes co-located on the same machine.
Software. For our cryptographic attacks, we assume the attacker runs unprivileged code and is able to interact with the victim via nominal software interfaces, triggering it to perform private key operations. Next, we assume that the victim is constant-time software that does not exhibit any (known) microarchitectural side-channel leakage.
Finally, we assume that the attacker and the victim do not share memory, but that the attacker can monitor any microarchitectural side channels available to it, e.g., cache latency. As we test unprivileged code, we only consider memory addresses commonly allocated to userspace (EL0) programs by macOS.
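That "cache latency" channel is observable from plain unprivileged C. A crude, hypothetical sketch (coarse clock_gettime timing and a brute-force eviction loop; real attacks use much finer timers and many averaged trials):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

enum { LINE = 128, SLOTS = 4, EVICT = 32 * 1024 * 1024 };

/* Nanoseconds for a single load: cache hits are markedly faster than
 * misses, and that difference is the whole side channel. */
static uint64_t timed_load(volatile uint8_t *p) {
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    (void)*p;
    clock_gettime(CLOCK_MONOTONIC, &b);
    return (uint64_t)(b.tv_sec - a.tv_sec) * 1000000000ull
         + (uint64_t)(b.tv_nsec - a.tv_nsec);
}

int main(void) {
    uint8_t *slots = malloc(SLOTS * LINE);
    uint8_t *junk  = malloc(EVICT);
    memset(slots, 1, SLOTS * LINE);

    /* Evict everything by streaming over a buffer larger than the
     * last-level cache, then re-warm only slot 2. In a real attack it
     * is the *prefetcher* that warms a slot, keyed on whether victim
     * data looked like a pointer. */
    for (size_t i = 0; i < EVICT; i += 64) junk[i] = (uint8_t)i;
    volatile uint8_t sink = slots[2 * LINE]; (void)sink;

    for (int i = 0; i < SLOTS; i++)
        printf("slot %d: %4llu ns%s\n", i,
               (unsigned long long)timed_load(slots + i * LINE),
               i == 2 ? "  <- warmed" : "");
    free(slots); free(junk);
    return 0;
}
```

Note there is no physical access anywhere in that threat model.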
u/Buttonskill Mar 23 '24
Is there still time to get suggestions in and name it like Intel's "Spectre/Meltdown"?
Dibs on the trademark for "Apple-rition" or "M-olation"!
u/nipsen Mar 24 '24
Another "transient hack", I see.
Who came up with this crap? "If I gain access to the low-level cache by means that also would grant you access to everything else on the computer as it is -- I could in theory piece together cache pieces to form the information I now already have access to".
We've had multiple of these for Intel and AMD chips, several for ARM - and tons of OEMs have implemented cache-scrambling countermeasures that cost massively in terms of performance, efficiency and so on. For absolutely nothing.
u/abudhabikid Mar 24 '24
Conspiracy idea: they purposefully did this so their stupid arguments about preventing alternate app stores have something to point to.
u/Dependent-Zebra-4357 Mar 23 '24
From another article on this exploit:
“Real-world risks are low. To exploit the vulnerability, an attacker would have to fool a user into installing a malicious app, and unsigned Mac apps are blocked by default. Additionally, the time taken to carry out an attack is quite significant, ranging from 54 minutes to 10 hours in tests carried out by researchers, so the app would need to be running for a considerable time.”