WoolyBully
Flash *could* be made reliable, but the fabbers go for density. Once
they get a really reliable cell designed, they scale it down and
multi-level it until it's flakey again.
You're a dork.
And you're AlwaysWrong.
Don said: Writing to "the same spot" rarely happens inside the SSD. The
internal controller tries to shift the writes around to provide some
degree of wear leveling. Even writing the same "sector" (viewed from
the disk API) can result in totally different memory cells being
accessed.
One thing that I'd like to know is, how does it store the allocation table?
That must be in flash too, or maybe EEPROM where cells are not paged, but
they still have a write-cycle lifetime.
There is your problem. You are so stupid that you would pay that much
for so little.
Hector is angry, Hector is angry.
Get a life Hector.
Jasen said: All blocks are subject to wear leveling.
That includes the FAT (if you use a filesystem that works that way).
The wear-leveling is hidden from the operating system.
Don said: Hi Tom,
They cheat! :> What you think of as the "FAT" is actually stored in
RAM (!) It is recreated at power-up by the controller (or "processor"
if you are using bare flash). This involves *scanning* the contents
of the FLASH device and piecing together *distributed* bits of
information that, in concert, let an *algorithm* determine what is
where (and which "copy" is most current, etc.)
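The power-up scan Don describes can be sketched roughly like this. The per-page metadata layout (a logical block number plus a monotonically increasing sequence number) is an assumption for illustration; real controllers use their own proprietary formats.

```python
# Hypothetical sketch of rebuilding the logical-to-physical map at power-up:
# scan every physical page, read its metadata, and keep only the most
# recent copy of each logical block.  The metadata layout is assumed,
# not any specific controller's format.

def rebuild_mapping(pages):
    """pages: list of (phys_addr, meta), where meta is None for an
    erased page, or a dict {"lba": int, "seq": int}."""
    mapping = {}   # logical block -> (phys_addr, seq)
    for phys_addr, meta in pages:
        if meta is None:
            continue                      # erased page, nothing stored
        lba, seq = meta["lba"], meta["seq"]
        # A higher sequence number means a newer copy of the same block.
        if lba not in mapping or seq > mapping[lba][1]:
            mapping[lba] = (phys_addr, seq)
    return {lba: phys for lba, (phys, _) in mapping.items()}

# Logical block 7 was written twice; only the newer copy (page 3) survives.
flash = [
    (0, {"lba": 7, "seq": 1}),
    (1, {"lba": 2, "seq": 5}),
    (2, None),
    (3, {"lba": 7, "seq": 9}),
]
print(rebuild_mapping(flash))   # {7: 3, 2: 1}
```

Note how the stale copy of block 7 simply loses the sequence-number comparison; nothing ever has to be updated in place.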
Thanks.
Think of how you would implement a *counter* in an EPROM, bipolar
ROM, etc.
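One classic answer to that puzzle: EPROM bits erase to 1 and can only be programmed 1 -> 0, so you can build a counter that only ever clears the next bit and never needs an erase. A toy simulation of that unary "tally" scheme (the class and bit width are illustrative, not any real odometer's design):

```python
# Sketch of a tally counter in write-once memory.  Each increment
# "programs" (clears) the lowest still-erased bit; the count is the
# number of cleared bits.  No erase cycle is needed until the field
# fills up.

class EpromCounter:
    def __init__(self, bits=32):
        self.word = (1 << bits) - 1      # erased state: all ones
        self.bits = bits

    def increment(self):
        for i in range(self.bits):
            if self.word & (1 << i):     # find the lowest erased bit
                self.word &= ~(1 << i)   # program it (1 -> 0)
                return
        raise RuntimeError("counter full: erase required")

    def value(self):
        return sum(1 for i in range(self.bits)
                   if not self.word & (1 << i))

c = EpromCounter(8)
for _ in range(5):
    c.increment()
print(c.value())   # 5
```

A real design would tally a coarse unit (say, tens of miles) this way and keep the fine count elsewhere, trading bits for write endurance.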
Don Y wrote:
I remember a car odometer in the '80s that used PROM. I assumed it used a
similar method.
Don Y said: Hi Peter,
Note that it's not 24/7(/365) that kills the drive but, rather, the
amount of data *written* to the drive, in total. For a reasonably
high traffic, COTS (i.e., not designed with SSDs in mind) server
application, 2-3 years is probably a *high* number!
Understanding what's going on under the hood of a SSD is usually
harder than an equivalent (magnetic) "hard disk".
Writing to "the same spot" rarely happens inside the SSD. The internal
controller tries to shift the writes around to provide some degree of
wear leveling. Even writing the same "sector" (viewed from the disk
API) can result in totally different memory cells being accessed.
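A toy model of that remapping, entirely illustrative (real FTLs also do garbage collection, bad-block management, and much more): each logical write is simply redirected to the least-worn free physical page.

```python
# Toy illustration of why "the same sector" lands on different cells:
# the controller redirects each logical write to the least-worn free
# physical page, retiring the stale copy.

class ToyFTL:
    def __init__(self, n_pages):
        self.erase_counts = [0] * n_pages
        self.free = set(range(n_pages))
        self.map = {}                    # logical sector -> physical page

    def write(self, sector):
        # Pick the least-worn free page for this write.
        page = min(self.free, key=lambda p: self.erase_counts[p])
        self.free.remove(page)
        old = self.map.get(sector)
        if old is not None:              # retire the stale copy
            self.erase_counts[old] += 1
            self.free.add(old)
        self.map[sector] = page
        return page

ftl = ToyFTL(4)
print([ftl.write(0) for _ in range(3)])  # same sector, three different pages
```

Three writes to sector 0 touch three different physical pages, which is exactly why per-sector write counting from the host side tells you little.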
Typically, wear leveling is confined to those parts of the medium that
are "free"/available. So, one good predictor of SSD life is "amount
of UNUSED space" coupled with "frequency of writes". Note that vendors
can cheat and make their performance data look better by packaging a
larger drive in a deliberately "derated" specification. E.g., putting
300G of FLASH in a 250G drive and NOT LETTING YOU SEE the extra 50G!
(but the wear-leveling controller inside *does* see it -- and EXPLOITS
it!)
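A back-of-envelope calculation shows why that hidden spare area matters: total writable data is roughly capacity times P/E cycles, divided by write amplification, and more spare area is what keeps write amplification low. All numbers below are illustrative assumptions, not vendor specifications.

```python
# Rough endurance estimate.  write_amplification is how many physical
# bytes the controller writes per logical byte from the host; generous
# over-provisioning (e.g. 300G of flash behind a 250G drive) keeps it low.

def drive_life_years(flash_gb, pe_cycles, writes_gb_per_day,
                     write_amplification):
    total_writable_gb = flash_gb * pe_cycles / write_amplification
    return total_writable_gb / writes_gb_per_day / 365.0

# 300 GB of flash, 3000 P/E cycles, 100 GB/day of host writes:
print(round(drive_life_years(300, 3000, 100, 2.0), 1))   # ~12.3 years
print(round(drive_life_years(300, 3000, 100, 5.0), 1))   # worse WA: ~4.9
```

Halving the spare area doesn't halve the life; it pushes write amplification up nonlinearly, which is why the "derated" drive lasts disproportionately longer.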
The part of the flash which can be used as empty space is very small,
if not zero.
Vendors want to sell big SSDs and put as little flash in
there as they can get away with.
You probably don't write terabytes to them though. Also you are
extremely unlikely to ever go anywhere near even a very low write
cycle limit (1000+) with a removable drive. In most usage, one does
just ONE write to the device, in each use.
So my other post re Intel SSD write limits. They are surprisingly
low.
OK, but why does anybody use an SSD?
I used them to make a hopefully silent PC, or one drawing little
power. Or, in portable apps, to make a tablet computer work above
13000ft in an unpressurised aircraft
http://www.peter2000.co.uk/ls800/index.html
Combining a HD with an SSD defeats both those things.
In actual usage, I find, the SSD outperforms a HD very noticeably in
very narrow/specific apps only, which tend to be
- a complex app, comprising hundreds of files, loading up, perhaps
involving loading up thousands of records from a complicated database
into RAM
- any app doing masses of random database reads
Anything involving writing is usually slower, and anything involving
sequential reading is no quicker.
I get very different results myself. If writes are a reasonably low
proportion, there is no speed loss. Bulk file copy to SSD is very fast,
much better than rotating media.
If you can afford enough SSD, I think they would make really great backup
media. Hmmm, maybe they already do. Tapes are expensive, and duplicate
HDs are fragile.
Wrong again.
(Imagine if it marked the original copy of the data
as "deleted" and, for some misfortune, could NOT later store
the moved copy in its intended NEW location.)
E.g., streaming video to/from an SSD tends to involve lots of sequential
accesses. An SSD can interleave the actual flash devices *inside* so
that two, three, ten, etc. writes can be active at a given time!
(not counting the RAM cache that is usually present).
OTOH, writing a "disk block" and then trying to write it again
"immediately" could give you atrocious results (perhaps not in the
trivial case but you should be able to see where this line of
reasoning is headed).
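The interleaving claim is easy to put rough numbers on. The die count and page-program time below are made-up illustrative figures, not any part's datasheet:

```python
# Sequential blocks striped round-robin across several flash dies let
# consecutive writes overlap instead of queueing behind one busy chip.

N_DIES = 4
PAGE_PROGRAM_US = 200          # time one die is busy programming a page

def single_die_time(n_pages):
    """All pages on one die: every program waits for the previous one."""
    return n_pages * PAGE_PROGRAM_US

def striped_write_time(n_pages):
    """Consecutive pages go to consecutive dies; up to N_DIES programs
    proceed in parallel, so total time is one program per 'wave'."""
    waves = -(-n_pages // N_DIES)        # ceiling division
    return waves * PAGE_PROGRAM_US

print(single_die_time(8), striped_write_time(8))   # 1600 400
```

The same striping does nothing for a write that must land back on a page just written, which is the "atrocious results" case above: there is no second die to hide the busy time behind.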
On Mon, 27 Feb 2012 21:17:20 -0800, josephkk
Isn't that what you just did?