Maker Pro

Are SSDs always rubbish under winXP?


WoolyBully

Flash *could* be made reliable, but the fabbers go for density. Once
they get a really reliable cell designed, they scale it down and
multi-level it until it's flakey again.

You're a dork.
 

WoolyBully

And you're AlwaysWrong.

Only a 12 year old mind needs to call names.

My description of you is not a name, it is a behavioral observation.
That is beside the fact that you do not know the first thing about how
they are making memory arrays these days, much less any reliability
figures, you lying POS.
 

Tom Del Rosso

Don said:
Writing to "the same spot" rarely happens inside the SSD. The
internal controller tries to shift the writes around to provide some
degree of wear leveling. Even writing the same "sector" (viewed from
the disk API) can result in totally different memory cells being
accessed.

One thing that I'd like to know is, how does it store the allocation table?
That must be in flash too, or maybe EEPROM where cells are not paged, but
they still have a write-cycle lifetime.
 

Don Y

Hi Tom,

One thing that I'd like to know is, how does it store the allocation table?
That must be in flash too, or maybe EEPROM where cells are not paged, but
they still have a write-cycle lifetime.

They cheat! :> What you think of as the "FAT" is actually stored in
RAM (!) It is recreated at power-up by the controller (or "processor"
if you are using bare flash). This involves *scanning* the contents of
the FLASH device and piecing together *distributed* bits of information
that, in concert, let an *algorithm* determine what is where (and which
"copy" is most current, etc.)

When you think about storage devices with non-uniform access
characteristics (in this case, much longer write times than
read times, durability, etc.), you can't approach the problem
with conventional thinking (well, you *could* but you'd end up
with really lousy implementations! :> ).

Think of how you would implement a *counter* in an EPROM, bipolar
ROM, etc.

For simplicity, assume an erased device starts out as "0".
So, a byte starts as 0000 0000b. You increment it to 0000 0001b.
Now, you go to increment it a *second* time (to 0000 0010b) only
to discover that you can't "write a 0" -- that would require ERASING
the LSb.

OTOH, if you count differently:
0 0000 0000
1 0000 0001
2 0000 0011
3 0000 0111
4 0000 1111
5 0001 1111
6 0011 1111
7 0111 1111
8 1111 1111
You don't have to "erase" as often as you otherwise would:
0 0000 0000
1 0000 0001
2 0000 0010 ERASE
3 0000 0011
4 0000 0100 ERASE
5 0000 0101
6 0000 0110 ERASE
7 0000 0111
8 0000 1000 ERASE
The fact that the ERASED flash cells (assuming 0's, here) can
be written with 1's -- and, that 1's can *also* be written with
1's (!) means you can skip the erase if you can exploit that
aspect.

The goal is to minimize the number of erases.
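
A minimal sketch of that counting scheme in C -- flash_program_byte()
and flash_erase_region() are hypothetical device primitives, and it
keeps the "erased = 0, program bits to 1" convention used above:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical primitives: per the convention above, the erased state
 * is 0x00, programming may only turn 0 bits into 1 bits, and only an
 * erase can clear a region back to all 0s.
 */
void flash_program_byte(volatile uint8_t *cell, uint8_t value);
void flash_erase_region(volatile uint8_t *base, size_t len);

/* Count = 8 * (fully set bytes) + set bits in the first partial byte. */
static unsigned counter_read(const volatile uint8_t *base, size_t len)
{
    unsigned count = 0;
    for (size_t i = 0; i < len; i++) {
        for (uint8_t v = base[i]; v; v >>= 1)
            count += v & 1u;
        if (base[i] != 0xFF)
            break;                  /* later bytes are still erased */
    }
    return count;
}

/* Each increment programs exactly one more bit; an erase is needed only
 * after 8*len increments instead of (roughly) every other increment.
 */
static int counter_increment(volatile uint8_t *base, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = base[i];
        if (v != 0xFF) {
            flash_program_byte(&base[i], (uint8_t)((v << 1) | 1u));
            return 0;               /* no erase needed this time */
        }
    }
    flash_erase_region(base, len);  /* all bits used: roll the count over */
    flash_program_byte(&base[0], 0x01);
    return 1;
}

That buys 8*len counts per erase, which is exactly the trade being
described: spend storage density (one bit per count) to save erases.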

The same idea is leveraged when storing the mapping structures
inside the FLASH. E.g., instead of OVERWRITING an existing
entry in the FAT with a "new" value, just *clobber* it!
(using the convention above, that would be like writing all
1's over an existing datum). The software then steps *over*
that entry when scanning the structure.

Eventually, when all of the entries in a "page" (I'm playing
fast and loose with terminology, here) are "clobbered", the
page can be ERASED and treated as a *clean* page to be reused
by "whatever".

[This is a real back-of-the-napkin explanation. There is a
lot more BFM involved. But, it's all painfully obvious -- once
you've *seen* it! :> Bugs creep in because you are juggling
several criteria at the same time -- trying to relocate data
to other pages, tracking which blocks are in which pages,
keeping track of how many erase cycles a page has experienced,
harvesting pages ready for erasure, etc. And, at the same
time, you are trying to make educated guesses at how best
to provide "performance" ("Do I erase this page *now* and
cause the current operation to wait? Or, do I defer that to
some later time in the hope that I can sneak it in 'for free'?",
etc.)]
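
To make that concrete, here is a rough sketch -- my own illustration,
not any particular vendor's FTL -- of how the power-up scan and the
clobber-instead-of-overwrite update fit together. Plain C assignments
stand in for what would really be program operations on a journal page,
and the entry layout and names (rebuild_map, remap, etc.) are invented
for the example:

#include <stdint.h>

#define JOURNAL_ENTRIES 256
#define NUM_LOGICAL      64

/* Per the "erased = 0" convention above: an erased slot is all 0s, a
 * live entry has VALID programmed into it, and clobbering a stale
 * entry means programming every remaining bit to 1 -- no erase needed.
 */
#define STATE_FREE  0x0000   /* erased, nothing written here yet          */
#define STATE_VALID 0x0001   /* entry written and current                 */
#define STATE_DEAD  0xFFFF   /* clobbered (all 1s); scanner steps over it */

/* One mapping record as it would sit in flash: "logical block L
 * currently lives in physical page P".
 */
struct map_entry {
    uint16_t state;
    uint16_t logical;
    uint16_t physical;
};

static struct map_entry journal[JOURNAL_ENTRIES]; /* stand-in for a flash page */

/* Power-up: scan the journal and rebuild the logical->physical table in
 * RAM.  Entries are appended in order, so the last non-clobbered entry
 * for a given logical block is the current one.
 */
static void rebuild_map(uint16_t map[NUM_LOGICAL])
{
    for (unsigned i = 0; i < NUM_LOGICAL; i++)
        map[i] = 0xFFFF;                     /* "unmapped" until found      */

    for (unsigned i = 0; i < JOURNAL_ENTRIES; i++) {
        if (journal[i].state == STATE_FREE)  /* reached the erased tail     */
            break;
        if (journal[i].state == STATE_DEAD)  /* clobbered: step over it     */
            continue;
        map[journal[i].logical] = journal[i].physical;
    }
}

/* Update a mapping: append a fresh entry, *then* clobber the stale one
 * (that order means a power cut never loses the block).  Returns -1 when
 * the journal is full and must be compacted into a freshly erased page.
 */
static int remap(uint16_t map[NUM_LOGICAL], uint16_t logical, uint16_t new_phys)
{
    unsigned slot;

    for (slot = 0; slot < JOURNAL_ENTRIES; slot++)
        if (journal[slot].state == STATE_FREE)
            break;
    if (slot == JOURNAL_ENTRIES)
        return -1;                           /* time for erase/reclaim      */

    journal[slot].logical  = logical;        /* append the new mapping...   */
    journal[slot].physical = new_phys;
    journal[slot].state    = STATE_VALID;

    for (unsigned i = 0; i < slot; i++)      /* ...then clobber the old one */
        if (journal[i].state == STATE_VALID && journal[i].logical == logical)
            journal[i].state = STATE_DEAD;   /* 0x0001 -> 0xFFFF only sets bits */

    map[logical] = new_phys;                 /* keep the RAM copy current   */
    return 0;
}

The parts deliberately left out -- compacting a full journal into a
freshly erased page, and tracking per-page erase counts -- are exactly
the "juggling" described above.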
 

amdx

There is your problem. You are so stupid that you would pay that much
for so little.

Hector is angry, Hector is angry.
Get a life Hector.
Mikek
 

Jasen Betts

One thing that I'd like to know is, how does it store the allocation table?
That must be in flash too, or maybe EEPROM where cells are not paged, but
they still have a write-cycle lifetime.

All blocks are subject to wear leveling.
That includes the FAT (if you use a filesystem that works that way).

The wear-leveling is hidden from the operating system.
 

Guest

On Sat, 25 Feb 2012 22:39:52 +0000, Peter ([email protected]) wrote:

[snip]
OK, but why does anybody use an SSD?

Audible noise, power consumption, shock resistance - at least those were
the criteria that drove our decision to use them. This is for an
embedded system though, not a desktop PC.
I used them to make a hopefully silent PC, or one drawing little
power. Or, in portable apps, to make a tablet computer work above
13000ft in an unpressurised aircraft
http://www.peter2000.co.uk/ls800/index.html

Have a look at Microsoft's Enhanced Write Filter. As far as I know it's
only available as a component for the embedded versions of Windows - we
have used it for XP and 7 so far.
Our usage model is to partition the drive into two, then write-protect
the C: drive using EWF and write all the application data to D:. All
writes to C: are cached in RAM and get lost on power-off. This works
fine for our usage pattern; the machine is not networked and the end
user is not expected to, or allowed to, make any changes to the OS.

Based on our experiences with SSDs I'd never use one for the primary
storage for a desktop PC - and this was with industrial-grade single-
level cell devices, not cheap commodity MLC stuff.
 

Tom Del Rosso

Jasen said:
All blocks are subject to wear leveling.
That includes the FAT (if you use a filesystem that works that way).

The wear-leveling is hidden from the operating system.

Of course I realize it's hidden from the OS. I said allocation table as in
generic index, not FAT as in file system. But Don explained that it doesn't
use a table at all.
 

Tom Del Rosso

Don said:
Hi Tom,

They cheat! :> What you think of as the "FAT" is actually stored in
RAM (!) It is recreated at power-up by the controller (or "processor"
if you are using bare flash). This involves *scanning* the contents
of the FLASH device and piecing together *distributed* bits of
information that, in concert, let an *algorithm* determine what is
where (and which "copy" is most current, etc.)
Thanks.


Think of how you would implement a *counter* in an EPROM, bipolar
ROM, etc.

I remember a car odometer in the 80's that used PROM. I assumed it used a
similar method.
 

Don Y

Hi Tom,

Tom Del Rosso wrote:

I remember a car odometer in the 80's that used PROM. I assumed it used a
similar method.

Probably.

Ages ago (dealing with very *slow* EPROMs), I used to put different
options (that I wanted to test/evaluate) in the binary image using a
form similar to:

LD <register>,Value6
LD <register>,Value5
LD <register>,Value4
LD <register>,Value3
LD <register>,Value2
LD <register>,Value1
DoSomething <register>

Then, after evaluating how the code runs with "Value1", pull the
EPROM and overwrite "LD <register>,Value1" with (effectively) "NoOps"
to see how the code runs with "Value2". Lather, Rinse, Repeat.

(this assumes the opcode for that effective NoOp is more dominant
than the "LD", etc. Obviously, there are other ways of getting
similar results depending on what your actual opcodes are).
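
For what it's worth, a tiny model of why that overwrite works, assuming
a CPU whose NOP encoding is all-zero bits (0x00 on a Z80 or 8080, where
LD A,n is 0x3E followed by the operand) and a UV EPROM whose erased
state is 0xFF, with programming only able to drive bits from 1 to 0
(note: the opposite polarity to the simplifying "erased = 0" convention
used earlier in the thread). The byte values are just for illustration:

#include <stdint.h>
#include <stdio.h>

/* Programming an EPROM can only clear bits (1 -> 0), so "burning" a new
 * value on top of an old one yields the bitwise AND; getting any bit
 * back to 1 means a full UV erase of the whole chip.
 */
static uint8_t eprom_overwrite(uint8_t old_value, uint8_t new_value)
{
    return old_value & new_value;
}

int main(void)
{
    uint8_t code[2] = { 0x3E, 0x42 };   /* LD A,0x42 -- the "Value1" load */

    /* NOP is all zero bits, so it can be burned over *any* byte without
     * erasing: old & 0x00 == 0x00, always.  The next LD up now "wins".
     */
    code[0] = eprom_overwrite(code[0], 0x00);
    code[1] = eprom_overwrite(code[1], 0x00);

    printf("patched: %02X %02X (two NOPs)\n", code[0], code[1]);
    return 0;
}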
 

Nico Coesel

Don Y said:
Hi Peter,



Note that it's not 24/7(/365) that kills the drive but, rather, the
amount of data *written* to the drive, in total. For a reasonably
high traffic, COTS (i.e., not designed with SSDs in mind) server
application, 2-3 years is probably a *high* number!


Understanding what's going on under the hood of a SSD is usually
harder than an equivalent (magnetic) "hard disk".

Writing to "the same spot" rarely happens inside the SSD. The internal
controller tries to shift the writes around to provide some degree of
wear leveling. Even writing the same "sector" (viewed from the disk
API) can result in totally different memory cells being accessed.

Typically, wear leveling is confined to those parts of the medium that
are "free"/available. So, one good predictor of SSD life is "amount
of UNUSED space" coupled with "frequency of writes". Note that vendors
can cheat and make their performance data look better by packaging a
larger drive in a deliberately "derated" specification. E.g., putting
300G of FLASH in a 250G drive and NOT LETTING YOU SEE the extra 50G!
(but the wear-leveling controller inside *does* see it -- and EXPLOITS
it!)

The part of the flash which can be used as empty space is very small
if not zero. Vendors want to sell big SSDs and put as little flash in
there as they can get away with. However, for the wear leveling to
actually work, the SSD needs to know which space is really free. This
is why SSDs have extra commands which allow the OS to tell the SSD
which sectors are in use and which are not.

Windows XP does not have this feature (the TRIM command), so what
you'll see is that the SSD will get slower over time and 'die'
prematurely.
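
For reference, the "extra commands" are ATA TRIM (UNMAP on SCSI/SAS).
On an OS that supports it, the filesystem passes freed ranges down to
the drive automatically (Windows 7 and later do; XP never did). Below
is a rough Linux-only sketch of asking for it by hand, using the FITRIM
ioctl that the fstrim(8) utility is built on (needs root, and the mount
point "/" is just an example):

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>               /* FITRIM, struct fstrim_range */

int main(void)
{
    int fd = open("/", O_RDONLY);   /* any directory on the SSD filesystem */
    if (fd < 0) { perror("open"); return 1; }

    struct fstrim_range range = {
        .start  = 0,
        .len    = UINT64_MAX,       /* trim free space across the whole fs */
        .minlen = 0,
    };

    if (ioctl(fd, FITRIM, &range) < 0) {
        perror("FITRIM");           /* e.g. no discard support underneath */
        close(fd);
        return 1;
    }

    /* The kernel writes back how many bytes it actually trimmed. */
    printf("trimmed %llu bytes of free space\n",
           (unsigned long long)range.len);
    close(fd);
    return 0;
}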
 

Don Y

The part of the flash which can be used as empty space is very small
if not zero.

That's not true. (well, perhaps for low end "consumer" kit it might be)
Manufacturers can (and do) set aside extra flash capacity that is not
"user accessible/visible". This is called "overprovisioning". It is
a common way to increase performance (not just durability) of enterprise
class devices.

Some devices may have as much as 30% or 40% *extra* (theoretical)
"storage capacity" within the drive. Some SSD manufacturers have
mechanisms (I want to say "provisions" but that would be a bad
choice of words :> ) by which the user can customize the extent
of the "overprovisioning". *Roughly* speaking, this is equivalent
to low-level formatting the drive for a capacity less than its
actual capacity. The excess capacity allows the controller within
the drive more flexibility in replacing/remapping "blocks" to
enhance durability, etc.

Expect future devices to go to even greater lengths trying to
"move stuff around". E.g., the controller can, theoretically,
take *any* "block" and move it to anywhere else as it sees fit
as long as it keeps track of what it has done (and, does so in
a manner that doesn't allow the block to "disappear" in the
process -- imagine if it marked the original copy of the data
as "deleted" and, for some misfortune, could NOT later store
the moved copy in its intended NEW location)
Vendors want to sell big SSDs and put as little flash in
there as they can get away with.

They have two pressures working on them: one is to drive price
per GB down -- so use exactly as much flash as you claim to have
in the device; the other is to get reliability/durability/performance
*up* -- so, put EXTRA flash in the device and don't tell anyone
it's there (or, let *them* make that decision).

There's nothing new about this sort of practice. How many
"industrial" temp range devices actually have different processes
and geometries than their "commercial" counterparts? How many
"high reliability" devices have just undergone a more thorough
shake'n'bake before sale? etc.
 

josephkk

You probably don't write terabytes to them though. Also you are
extremely unlikely to ever go anywhere near even a very low write
cycle limit (1000+) with a removable drive. In most usage, one does
just ONE write to the device, in each use.

So my other post re Intel SSD write limits. They are surprisingly
low.


OK, but why does anybody use an SSD?

I used them to make a hopefully silent PC, or one drawing little
power. Or, in portable apps, to make a tablet computer work above
13000ft in an unpressurised aircraft
http://www.peter2000.co.uk/ls800/index.html

Combining a HD with an SSD defeats both those things.

In actual usage, I find, the SSD outperforms a HD very noticeably in
very narrow/specific apps only, which tend to be

- a complex app, comprising hundreds of files, loading up, perhaps
involving loading up thousands of records from a complicated database
into RAM

- any app doing masses of random database reads

Anything involving writing is usually slower, and anything involving
sequential reading is no quicker.

I get very different results myself. If the writes are a reasonably low
proportion, there is no speed loss. Bulk file copy to an SSD is very
fast, much better than rotating media.

If you can afford enough SSD, I think they would make really great
backup media. Hmmm, maybe they already do. Tapes are expensive, and
duplicate HDs are fragile.

?-)
 

Don Y

Hi Joseph,

I get very different results myself. If the writes are a reasonably low
proportion, there is no speed loss. Bulk file copy to an SSD is very
fast, much better than rotating media.

The nature of the transaction has a LOT to do with performance. Thus,
how well a particular SSD implementation may perform in a particular
application domain!

E.g., streaming video to/from a SSD tends to involve lots of sequential
accesses. An SSD can interleave the actual flash devices *inside* so
that two, three, ten, etc. writes can be active at a given time!
(not counting the RAM cache that is usually present).

OTOH, writing a "disk block" and then trying to write it again
"immediately" could give you atrocious results (perhaps not in the
trivial case but you should be able to see where this line of
reasoning is headed)
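
A toy illustration of that point -- the channel count and timing number
below are made up, and no real controller is this simple -- showing why
a long sequential stream overlaps nicely across interleaved flash
channels while rewriting one block gets no overlap at all:

#include <stdint.h>
#include <stdio.h>

#define NUM_CHANNELS      4     /* independent flash dies/buses (illustrative) */
#define PAGE_PROGRAM_US 900     /* per-page program time, roughly NAND-like    */

/* Round-robin striping: consecutive "disk blocks" land on different
 * channels, so their (slow) program operations can run in parallel.
 */
static unsigned channel_for_block(uint32_t block)
{
    return block % NUM_CHANNELS;
}

/* Crude estimate: N consecutive blocks split evenly over the channels. */
static uint64_t sequential_write_us(uint32_t count)
{
    uint32_t per_channel = (count + NUM_CHANNELS - 1) / NUM_CHANNELS;
    return (uint64_t)per_channel * PAGE_PROGRAM_US;
}

/* Rewriting the same block again and again queues every write behind
 * the previous one on the same channel: no overlap.
 */
static uint64_t rewrite_same_block_us(uint32_t count)
{
    return (uint64_t)count * PAGE_PROGRAM_US;
}

int main(void)
{
    printf("blocks 0..7 -> channels: ");
    for (uint32_t b = 0; b < 8; b++)
        printf("%u ", channel_for_block(b));
    printf("\n1000 sequential writes: ~%llu us; 1000 rewrites of one block: ~%llu us\n",
           (unsigned long long)sequential_write_us(1000),
           (unsigned long long)rewrite_same_block_us(1000));
    return 0;
}
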
If you can afford enough SSD, I think they would make really great
backup media. Hmmm, maybe they already do. Tapes are expensive, and
duplicate HDs are fragile.

I think SSDs are still too small for useful backup media (given that
economy goes with scale... when you can put 4GB on a ~$0.00 DVD it's
awfully hard to compete with pricey SSDs anywhere south of a TB)

I still prefer tape -- though I use large disks, optical media *and*
tape for different types of backups (I keep three copies of everything
of importance -- though often have only *one* copy of something
that is "current" :< )

In addition to "fragile" (some tapes are actually *more* fragile!),
I dislike disks because it is too easy for a single failure to
wipe out the entire volume! You can't just pull the platters
out and stuff them into another "transport" like you can with other
removable media (tape, optical, etc.).
 

UltimatePatriot

imagine if it marked the original copy of the data
as "deleted" and, for some misfortune, could NOT later store
the moved copy in its intended NEW location)


Very bad. No mission critical use there.

"That's not a knife..."
 

Copacetic

E.g., streaming video to/from a SSD tends to involve lots of sequential
accesses. An SSD can interleave the actual flash devices *inside* so
that two, three, ten, etc. writes can be active at a given time!
(not counting the RAM cache that is usually present).

Question:

On a Micro SDHC (I know, not an SSD, per se) would these 'smarts' not
have to be built into the chip device itself, as one could not expect
every camera or OS to all dance the same?

So, the 'watchdog' chip would be right there with the array, in
hardware, even if some newer ones are "adjustable"?
 

Corbomite Carrie

OTOH, writing a "disk block" and then trying to write it again
"immediately" could give you atrocious results (perhaps not in the
trivial case but you should be able to see where this line of
reasoning is headed)

"*I* am Capt. Kirk!"

"NO!... *I* AM CAPT. KIRK!"

(fight it out)
 