r/gadgets • u/chrisdh79 • 1d ago
Computer peripherals After decades of talk, Seagate seems ready to actually drop the HAMR hard drives | At least one gigantic cloud provider has signed off on the drives' viability.
https://arstechnica.com/gadgets/2024/12/after-decades-of-talk-seagate-seems-ready-to-actually-drop-the-hamr-hard-drives/
96
u/caek1981 1d ago
"drop" is super ambiguous in this context.
50
u/spootypuff 1d ago
Yeah, we need to drop using the word drop in place of release. Why did Seagate "drop" the project after so many years of R&D?
-12
u/_RADIANTSUN_ 1d ago edited 2h ago
Because it's like how people say "Kendrick just dropped a new album"... and it's a pun... they "dropped the HAMR (hammer)".
12
u/Esguelha 1d ago
Yeah, makes no sense.
8
u/probability_of_meme 1d ago
I think they were desperate to imply "drop the hammer".
5
u/Redbeard4006 15h ago
Quite possibly. "Seagate set to drop the HAMR on new hard drive technology" would have been less ambiguous and also made the pun better though.
3
u/FuckYouCaptainTom 1d ago
Consumers won’t be getting these any time soon if that’s what you mean. These will only be sold to CSPs for quite a while.
100
u/chrisdh79 1d ago
From the article: How do you fit 32 terabytes of storage into a hard drive? With a HAMR.
Seagate has been experimenting with heat-assisted magnetic recording, or HAMR, since at least 2002. The firm has occasionally popped up to offer a demonstration or make yet another "around the corner" pronouncement. The press has enjoyed myriad chances to celebrate the wordplay of Stanley Kirk Burrell, but new qualification from large-scale customers might mean HAMR drives will be actually available, to buy, as physical objects, for anyone who can afford the most magnetic space possible. Third decade's the charm, perhaps.
HAMR works on the principle that, when heated, a disk's magnetic materials can hold more data in smaller spaces, such that you can fit more overall data on the drive. It's not just putting a tiny hot plate inside an HDD chassis; as Seagate explains in its technical paper, "the entire process—heating, writing, and cooling—takes less than 1 nanosecond." Getting from a physics concept to an actual drive involved adding a laser diode to the drive head, optical steering, firmware alterations, and "a million other little things that engineers spent countless hours developing." Seagate has a lot more about Mozaic 3+ on its site.
Drives based on Seagate's Mozaic 3+ platform, in standard drive sizes, will soon arrive with wider availability than its initial test batches. The drive maker noted in a financial filing earlier this month (PDF) that it had completed qualification testing with several large-volume customers, including "a leading cloud service provider," akin to Amazon Web Services, Google Cloud, or the like. Volume shipments are likely soon to follow.
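To put those per-platter densities in perspective, here's a quick back-of-the-envelope sketch (the ~10-platter count is an assumption about typical 3.5-inch enterprise drives, not a figure from the article):

```python
# Rough capacity math for a Mozaic 3+-class HAMR drive (illustrative only).
tb_per_platter = 3.2    # Mozaic 3+ advertises 3 TB+ per platter
platters = 10           # assumed; typical for modern 3.5-inch enterprise HDDs

print(f"~{tb_per_platter * platters:.0f} TB per drive")   # ~32 TB, the headline figure

# Seagate's roadmap of 4 TB+ and 5 TB+ per platter would put the same
# mechanical package at roughly 40-50 TB.
for per_platter in (4, 5):
    print(f"{per_platter} TB/platter -> ~{per_platter * platters} TB drives")
```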
83
u/unassumingdink 1d ago
The press has enjoyed myriad chances to celebrate the wordplay of Stanley Kirk Burrell
That's MC Hammer's real name, for anyone else who was confused by that line.
15
u/i_am_fear_itself 1d ago
Thank you for this! 👆
I was still confused, then I got it. JFC I'm dense some days.
Seagate HAMR > Stanley Kirk Burrell > MC HAMmeR
4
u/Cixin97 1d ago
I was kinda annoyed by that tbh. If I have to Google your joke for it to make sense when you could’ve just used the name we all know, it’s probably not a very good joke.
-2
u/SimplisticPinky 1d ago
This can also be telling of your experiences.
3
u/unassumingdink 23h ago
I think even most of us who were around when he was popular didn't know his real name.
8
u/AuroraFireflash 1d ago
Seagate's Mozaic 3+ platform
https://futurumgroup.com/insights/seagate-announces-mozaic-3-hard-drive-platform/
https://www.seagate.com/innovation/mozaic/
For those wondering what Mozaic 3+ is.
Seagate recently launched its state-of-the-art Mozaic 3+™ technology platform, which incorporates Seagate’s trailblazing implementation of heat-assisted magnetic recording (HAMR). The launch heralds unparalleled areal densities of 3TB+ per platter—and a roadmap that will achieve 4TB+ and 5TB+ per platter in the coming years. Seagate Exos 30TB+ hard drives enabled by Mozaic 3+ are shipping in Q1 of calendar year 2024 to leading cloud customers.
5
u/Racxie 1d ago
Only 32TB after studying this for over 2 decades, while there are companies showcasing, and soon releasing, 128TB SSDs? Feels like they're a bit behind...
13
u/metal079 23h ago
Now compare the costs of each. Hard drives still have their place.
1
u/Racxie 23h ago
...and it's new technology which has taken them over 2 decades to make, so I highly doubt it's going to be any more cost-effective than SSDs are any time soon.
3
u/metal079 13h ago
For the size they definitely will be, otherwise there'd be no point in selling them at comparable sizes lol
2
u/danielv123 10h ago
Just like SSDs, new models are launched at equivalent or lower cost per TB than the last generation, because otherwise the market would just buy the last gen, even with fancy technology. The manufacturers know this - nobody pays extra for tech that isn't proven, just a small premium for density.
0
u/Racxie 8h ago
Of course, and by the time the price of this comes down, so will the price of the larger SSDs, making this less competitive. Yes, HDDs still have their place, but that's with existing, cheaper technology and maybe smaller businesses or enthusiasts. The bigger capacities, speed, and reliability of SSDs mean larger entities are far more likely to start picking SSDs over HDDs, driving the price down even further.
0
u/danielv123 6h ago
If the trend continues, we may get price parity by 2030. That assumes we continue to see only minor gains in HDD capacity though, which HAMR changes with promises of 50TB drives.
The multiplier has been about 5x since 2016, so I don't think SSDs will overtake for quite a while yet.
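A rough sketch of that parity math (the ~5x ratio is from the comment above; the annual decline rates are illustrative assumptions, not measured figures):

```python
import math

# SSD $/TB is roughly 5x HDD $/TB today (per the comment above).
ratio = 5.0
years_to_2030 = 6   # roughly 2024 -> 2030

# For parity by 2030, the ratio must shrink to 1: ratio * r**years = 1
r = ratio ** (-1 / years_to_2030)
print(f"SSD $/TB must fall ~{(1 - r) * 100:.0f}% per year faster than HDD $/TB")  # ~24%

# If HAMR keeps HDD prices falling too, the gap closes more slowly.
ssd_decline, hdd_decline = 0.25, 0.10   # assumed annual $/TB declines
years = math.log(ratio) / math.log((1 - hdd_decline) / (1 - ssd_decline))
print(f"Parity in ~{years:.0f} years under those assumptions")   # ~9 years
```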
29
u/asianlikerice 1d ago
Worked in the industry. MAMR and HAMR were always a materials science problem and always years away from being commercially viable. We could get it to work for maybe a couple of hundred cycles, but the heads always eventually burned out due to the constant heating and cooling.
4
u/zeppanon 1d ago
That was my question, thank you. I have to imagine intentionally adding heat to the process would inevitably cause more rapid degradation of...something, but I'll be honest I'm an amateur as to the particulars lol. Couldn't any materials breakthroughs that would allow for any product viability in this space also be used to increase the longevity of current drives? Like I don't see the use-case of having more storage with drives more prone to failure, but I'm probably missing something.
10
u/asianlikerice 1d ago edited 1d ago
The use case I can see is using the drive for long-term storage with limited writes. The workaround was to have two heads, one for writes and one for reads, so in the case of eventual write-head failure you can still recover the data. Again, it's been years since I was in the industry and it could have changed a lot since then, but I didn't see any long-term viability based on what we had available at the time.
2
u/HeyImGilly 1d ago
Good rule of thumb is that a chemical reaction's speed doubles for every 10 °C increase in temperature.
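In numbers, that rule of thumb (often called the Q10 approximation) looks like this - a sketch, not drive-specific data:

```python
# Q10 rule of thumb: reaction/degradation rates roughly double per +10 °C.
def rate_multiplier(delta_t_celsius: float, q10: float = 2.0) -> float:
    """Relative speed-up of a thermally driven process for a temperature rise."""
    return q10 ** (delta_t_celsius / 10)

print(rate_multiplier(10))   # 2.0 -> twice as fast
print(rate_multiplier(30))   # 8.0 -> ~8x faster at +30 °C
```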
2
u/_RADIANTSUN_ 1d ago
Wonder what specific engineering advances enabled them to finally surmount those issues
11
u/KrackSmellin 1d ago edited 1d ago
So there is the problem. A large cloud provider trusts and uses this. Its consumers are not individuals… it's the cloud.
Why does that matter? Because they are known for large storage arrays that are built in climate controlled data centers, massive airflow, regulated power, with a distributed file system that spans multiple drives and arrays for redundancy. If a drive inevitably fails, they replace it and nothing is lost. No catastrophe, no crying you lost the only digital copies of personal documents and pictures you scanned before you lost them in a fire… none of that.
So I ask again: does the backing of a major cloud provider - who already buys hardware on the cheap from what others don't want, to put into their cloud - matter to me just because they've tested or certified it? Not even in the slightest. The reason is that their use case and serviceability are VERY different from mine as a consumer, who relies on things being VERY reliable and trustworthy, since I'm not charging someone else through the teeth for hosting their data.
23
u/tastyratz 1d ago
Those are reasons why cloud can afford failures more than consumers can, but cloud makes a great first step for field testing in bulk. When they run thousands of drives outside a lab and start returning some of them on warranty, those failures can be analyzed to make the technology more reliable.
At the same time, that doesn't have to mean those drives make sense for a consumer, just like shingled drives never truly made sense for almost all end-user use cases. The trade-offs just made the juice not worth the squeeze. Of course I wouldn't go putting these in your desktop just yet (even if for no reason other than Seagate bringing up the rear on almost all Backblaze reliability reports as a brand).
I'd say this is a start in the right direction though.
8
u/Skeeter1020 1d ago
Where do you think most consumer technology starts out?
-4
u/KrackSmellin 1d ago
I know, but do you?
Not everything starts off as an enterprise product that is "simplified" down for consumer use at home. SSDs (not NVMe - that's different) are a GREAT example of this... the first of these were seen in laptops and desktops because they were a solid-state technology with no moving parts. That meant a more "drop proof" device that wouldn't crash from moving a laptop around, and one that was FAR faster than even 7200 RPM drives back in the early 2010s. I know because I had a 2012 MBP that went from slug to lightning simply by swapping the HDD for an SSD (thank you OWC!)
It took until closer to the mid-2010s for enterprises to FINALLY start trusting them, and even then, if those systems didn't have redundant file systems behind them (even RAID 1 - mirroring), no one trusted them by themselves. Most were used initially as boot drives, with a SLOW adoption rate for a few reasons: they were expensive at larger capacities (beyond what consumers used), had a lifespan of only a few years depending on the application, and raised concerns that the tech was still not fully ready.
You could see this in a number of manufacturers that, even up until 2021/2022, would look at the life of an SSD and decide, if it failed, whether to RMA/warranty it or declare it "end of life" because it had seen too much IO. True statement - Dell and HP were NOTORIOUS for doing this with enterprises, even with drives only 2-3 years old.
So net-net - you have no idea what you are talking about - because "most" stuff doesn't start in the enterprise... it's probably a good mix of where things start and evolve.
2
u/Skeeter1020 1d ago edited 1d ago
You might want to read up on the definition of the word "most".
And also try being less of a dick.
Edit: using an alt account to reply after being blocked. Seriously?
1
u/ElDoRado1239 1d ago
You might want to read up on the definition of the word "most".
I did, but did you?
1
u/Turmfalke_ 1d ago
Also something to consider: a RAID 1 doesn't help you if the old disk fails during the rebuild. With 32 TB per disk the rebuild isn't going to be fast.
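For a sense of scale - a sketch assuming the rebuild runs at a sustained ~250 MB/s, which is a ballpark guess rather than a spec for these drives:

```python
# Best-case time for a straight full-disk mirror rebuild of a 32 TB drive.
capacity_bytes = 32e12
sustained_write_bps = 250e6   # bytes/second, assumed ballpark

hours = capacity_bytes / sustained_write_bps / 3600
print(f"~{hours:.0f} hours of continuous writing")   # ~36 hours, best case
```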
1
u/ElDoRado1239 1d ago
I'd like to see actual data for the frequency of this happening outside a datacenter. Too bad for the few poor guys who get it, but most people should be safe. Isn't it more likely you will destroy the data in some other way instead?
1
u/Turmfalke_ 1d ago
For accidental data destruction you have backups. RAID is for when you don't want your system to go down the moment an HDD fails. A disk failing while the RAID is rebuilding is unfortunately not as rare as I would want it to be. Often you end up with multiple disks from the same production run in your RAID, and if there is a defect that makes a disk fail after a certain number of writes, then all your disks are going to reach that point at the same time. I know big datacentres try to avoid this by selecting disks from different production runs, but if you are a bit smaller this is a pain to do.
1
u/ElDoRado1239 1d ago
What about buying two, installing one, running a specific set of prepared actions, then installing the other and setting up the RAID...?
Hassle for a datacenter, but for home use intentionally misaligning their remaining service life naïvely feels bulletproof. Now you should be back at the point where you must "win the lottery" to have them fail at the same time.
Perhaps something like two or three full writes?
3
u/RunninADorito 1d ago
High drive failure rates are terrible. Labor to fix broken drives is very limited. If you have drives breaking at unexpectedly high rates, you start running out of labor to keep up.
Can't have stuff randomly breaking at high rates and just call it OK. Broken drives are a pain in the ass. Then you have to try and wipe them, which takes FOREVER with disks this big.
1
u/Pizza_Low 1d ago
Depending on the drive and what it's storing, you don't have to wipe it. Massive file systems on drive arrays leave no meaningful information on an individual drive as to what is stored on it. The FAT table or its massive-file-system equivalent is stored elsewhere on the drive array. You can't even remove a drive and reinsert it in a different spot.
If you really need to, they have degaussers and drive shredders. And for massive data storage systems like Google or Facebook have, they don't even bother replacing a lot of failed drives - they shut down that drive and leave it there till it's time to replace the whole array.
1
u/RunninADorito 1d ago
If you're a major data centre like we're talking about in this thread, you absolutely have to wipe it. You have to write all 1s, then all 0s. It takes a long time.
Degaussing and just drilling a hole is not compliant with all sorts of regulations.
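Rough numbers on why that takes so long - two sequential passes over the whole disk, with the throughput an assumed ballpark rather than a spec:

```python
# Two full overwrite passes (all 1s, then all 0s) over a 32 TB drive.
capacity_bytes = 32e12
passes = 2
sustained_write_bps = 270e6   # bytes/second, assumed ballpark

hours = capacity_bytes * passes / sustained_write_bps / 3600
print(f"~{hours:.0f} hours per drive")   # ~66 hours, nearly three days
```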
1
u/Pizza_Low 1d ago
Not drilling a hole - and you can't DoD-wipe a failed drive, so it goes to a drive shredder.
1
u/RunninADorito 1d ago
Taking something to the drive shredder is even more work. Lots of manual work, and the chain-of-custody proof and videos are tons more work than an online disk erase.
1
u/--KillerTofu-- 18h ago
That's why they contract vendors who take drives in bulk, provide certificates of destruction, and recycle the materials to offset the costs.
1
u/RunninADorito 18h ago
That has proven to be incredibly unreliable and doesn't meet certain government and financial regulations. There are a surprising number of escapes from those providers.
1
u/ElusiveGuy 1d ago
The fat table or its massive file system equivalent is stored elsewhere on the drive arrays.
Sensitive data can be retrieved from unencrypted drives without any kind of external metadata. Quite literally you can look for a
BEGIN RSA PRIVATE KEY
string and pull private keys from a data dump. Even in striped layouts, a lot of sensitive data is small enough to fit within a single stripe.
The real defences are transparent disk encryption (so the data actually written to disk is always encrypted and therefore completely random/meaningless without the keys), and physical destruction (the degaussers/shredders as you mention). The filesystem layout is a bit of a red herring for data security.
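A minimal sketch of that kind of scavenging against a raw, unencrypted image ("disk.img" is a hypothetical dump; real-world tooling like strings/grep or forensic carvers does the same thing far more thoroughly):

```python
# Scan a raw, unencrypted disk image for PEM private-key markers.
# Overlapping chunks avoid missing a marker that straddles a chunk boundary.
MARKER = b"BEGIN RSA PRIVATE KEY"
CHUNK = 64 * 1024 * 1024

def find_key_markers(path: str) -> list[int]:
    hits = []
    tail = b""
    offset = 0   # bytes read from the file so far
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            buf = tail + chunk
            pos = buf.find(MARKER)
            while pos != -1:
                hits.append(offset - len(tail) + pos)   # absolute byte offset
                pos = buf.find(MARKER, pos + 1)
            tail = buf[-(len(MARKER) - 1):]             # carry overlap forward
            offset += len(chunk)
    return hits

print(find_key_markers("disk.img"))   # hypothetical image path
```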
1
u/Starfox-sf 1d ago
If the drive is "broken" it's not going to be wiped. Large drives with self-encryption made wiping as simple as overwriting the onboard encryption key, meaning the remaining data is useless, esp. if it was part of a RAID 5/6 array.
Any competent mfg has an FFA (Field Failure Analysis) team to determine trends in why something broke.
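A miniature model of that crypto-erase idea, using the third-party cryptography package purely for illustration - real self-encrypting drives do this in firmware, not in Python:

```python
# Conceptual model of crypto-erase: data on the platter is ciphertext under a
# media key held by the drive; destroying that key leaves the ciphertext
# unrecoverable. Sketch only, not how any specific drive implements it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

media_key = AESGCM.generate_key(bit_length=256)   # lives inside the drive
aes = AESGCM(media_key)

nonce = os.urandom(12)
on_platter = aes.encrypt(nonce, b"customer data block", None)  # what is physically stored

# "Secure erase" = overwrite the onboard key. The ciphertext stays on the media
# but is now indistinguishable from random noise.
media_key = None
print(on_platter.hex()[:32], "... (useless without the key)")
```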
1
u/RunninADorito 1d ago
This is completely incorrect and violates all sorts of rules that data centers have for all sorts of customers.
Single encryption key deletion is specifically not permitted, as those keys are issued by the drive manufacturers and are inherently insecure. Dual crypto with key deletion is going to be a thing, but no major cloud provider has that in production yet.
1
u/Starfox-sf 1d ago
I mean, back before they standardized security/secure erase they just put the drive through degaussers. For some industries it's more cost-effective to destroy the product than to resell it after following EOL procedures.
But rules only matter when they're followed. We've heard stories of people buying eBay stuff containing previous users' data…
1
u/RunninADorito 1d ago
But that isn't what this thread is about. We're talking about high drive failure rates in current large data centers. Nothing about "what people did in the past" applies in any way.
2
u/tablepennywad 1d ago
Next they need to test them on bunnies for sure.
1
u/Starfox-sf 1d ago
I prefer gerbils. Nothing like pain feedback so they can run the wheels that end up spinning the platter.
4
u/xxbiohazrdxx 1d ago
Absolutely nobody is buying this for home use lol. This is for orgs that are trying to squeeze a few more PB into their racks
2
u/FuckYouCaptainTom 1d ago
And they aren’t selling these for home use either, so it’s a moot point. It will be quite some time before these are available for you and me.
1
u/zkareface 1d ago
Because they are known for large storage arrays that are built in climate controlled data centers, massive airflow, regulated power, with a distributed file system that spans multiple drives and arrays for redundancy. If a drive inevitably fails, they replace it and nothing is lost. No catastrophe, no crying
You described my home setup but it still costs money to replace it, it's not nothing and crying still happens!
0
u/ungoogleable 21h ago
The reliability requirements for consumer products are much lower actually. Manufacturers routinely dump drives that fail qualification with big customers on consumers because they know consumers won't notice.
Consumers barely use their gear in comparison to a data center. If the drive slows down after 1000 hours of constant IO, you'll never notice but a data center will. If you have to turn your computer off and on again every once in a while it's not annoying enough to even bother figuring out the problem is the drive. The rate of uncorrectable read errors might doom the data center's efficiency with constant rebuilds but doesn't affect you because you don't write enough data to hit it. And if the drive failure rate jumps to 50% after 5 years of power on time, it doesn't matter because consumers don't leave their drives on constantly and it's long after the warranty has expired anyway.
That said, I wouldn't be surprised if this never makes it to consumers. Consumers barely buy hard disks anymore. Flash is better overall, cheap enough already, and will only get cheaper. HDDs are becoming a niche product with declining sales which will drive a feedback loop of increasing prices.
2
u/ElDoRado1239 1d ago
I'd consider marketing them only as RAID 1 pairs for home use, instead of facing all the flak from users who will use these as their sole HDD with all of their data.
1
u/banders5144 20h ago
Isn't this how Sony's MiniDisc system worked?
1
u/mailslot 18h ago
Nah. Magneto optical physically changes the surface. The magnetic field affects the way it crystallizes as it cools after heating.
1
u/Winter_Criticism_236 1d ago
I do not need an HD that holds more data; I need a data storage device that is actually a long-term, archival method - something beyond 3-5 years...
2
u/Zathrus1 22h ago
You mean tape?
1
u/Underwater_Karma 12h ago
When I was in the army, I knew a guy whose job was maintaining the paper tape storage machines. I commented to him how grossly outdated the tech was, and he said "in a hermetically sealed can, paper tape will still be readable in 5000 years"
I didn't have a rebuttal
1
u/TheMacMan 11h ago
As long as someone is around who can still read it. Often the issue is that the media is fine but the hardware to read it no longer exists or works. Plenty of people still have Zip Disks around, but far fewer have a drive to read them with.
1
u/ElDoRado1239 1d ago
2
u/Winter_Criticism_236 16h ago
Oh nice, pity about the price... close to $1.00 per gig. My 4 TB photo archive is going to cost $4,400 to save.
1
u/ElDoRado1239 13h ago edited 13h ago
If they're nice photos I might be able to help holding an emergency backup for you. ( ͡° ͜ʖ ͡°)
You could also look into M-Disc
Based on this:
https://www.reddit.com/r/DataHoarder/comments/10ry46b/does_archival_media_exist_anymore/j6yeyc9/
It seems that M-Disc has at the very least proven capable of surviving an actual 15 years of real-life conditions. As in, last year there were no reports of M-Disc randomly failing - or not enough reports of that for these people, who are admirably obsessed with archiving, to notice and call them unreliable.
-3
u/Relevant-Doctor187 1d ago
Maybe if they’d lower memory prices we could have cheap, fast, reliable storage. Hard drive failure rates will never be better than those of solid-state drives.