Maker Pro

1000 year data storage for autonomous robotic facility

  • Thread starter Bernhard Kuemel

Clifford Heath

I made the assumption that the robots themselves would be the "read
hardware"

All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1's and 0's in 1000yr.

If it does, one might assume that there are times during that period
where interest is sufficient to copy to new or better media.

I still have files that have survived five generations of media tech.
 

Arno

In comp.sys.ibm.pc.hardware.storage [email protected] said:
I made the assumption that the robots themselves would be the "read
hardware". I suggested (as an extreme example) a big slab of stone/
osmiridium/whatever inscribed with all the instructions they'd need to
maintain the facility and themselves as literal zeroes and ones, to be
read by the robots themselves.
Crude, but considering the requirements, suitable.

Mark L. Fergerson

At this time, this would either require hard-coding all possible
modes in which this can break down, with recovery instructions (infeasible),
or true AI in there (likely infeasible as well). Maybe in a few
hundred years something like this project could be undertaken,
but today they do not even know how to statically mark a
nuclear waste facility in a way suitable to warn people away
for the foreseeable future.

Arno
 
Maybe I'm missing something here.

Why would you want to guarantee something for this long?

If you are trying to provide for civilisation disaster recovery then surely it is better to maintain something that can reboot the world within a generation, so that the survivors can remember, and therefore understand, the point of putting the effort in. If you wait 50 generations, chances are they will reboot themselves and will not need our stuff.

If you want to get rid of radioactive stuff with a long half-life, how about designing a container that can survive a journey to the bottom of the Mariana Trench?
 

Nico Coesel

Bernhard Kuemel said:
Sorry for repost, I posted to sci.electronics before, which does not exist.

Hi!

I'm planning a robotic facility [3] that needs to maintain hardware
(exchange defective parts) autonomously for up to 1000 years. One of the
problems is to maintain firmware and operating systems for this period.
What methods do you think are suitable?

I'd go lo-tech. Etchings in stainless steel, for example. Maybe
gold-plate them afterwards for an extra protective layer.
 

josephkk

Also:
<http://www.10000yearclock.net>

Does the 10,000 year clock come with a warranty?

At this time, there is a 16 second difference between GPS atomic clock
time and UTC, which is based on astronomical time.
<http://leapsecond.com/java/gpsclock.htm>
<http://leapsecond.com/java/nixie.htm>
They were identical on Jan 6, 1980 and are diverging at the rate of
about 2 seconds per year. Ignoring variations in the earth's speed of
rotation:
<http://tycho.usno.navy.mil/leapsec.html>
in 1000 years, the two clocks might be 2000 seconds or 33 hours
different.

Ummm. There are 3600 seconds in one hour, care to recheck that last
calculation?

?-)
 

josephkk

About 1000 years ago, we were just coming out of the dark ages.
Proposing a 1000 year document preservation of, e.g., the Library at
Alexandria would have faced technical limitations equivalent to those of
your current proposal. I doubt that the dark ages religious establishments
could have succeeded given the wide range of totally inconceivable and
unpredictable threats that have arrived in the last 1000 years. It's
equally unlikely that you could defend your data against the next 1000
years of currently known threats, much less the unknown threats. All
it would take is a biological niche to open for bacteria that eats
silicon or lives on Epoxy-B, and your archive is gone.
One more time, OP is talking about a self-maintaining robot (better make
that a robot society).

?-)
 

piglet

I thought about this:
ROMs/PROMs, replacing them when checksum fails.
ROM/PROM masters, being copied once a year to flash ROM.
1000 flash ROMs, refreshing once a year from the ones that still have a
valid checksum.

Tunnel diodes made in the 1960s are already rediffusing into useless globs of germanium or silicon. I think the dopants making all the p-n junctions in those ROMS you suggest will also rediffuse over the centuries. Semiconductors as we know them might be ruled out unless the robots can operate a fab-line.
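
For concreteness, here is a minimal sketch of the quoted refresh scheme: every flash copy carries a stored checksum, all copies are verified periodically, and any copy that fails is rewritten from one that still verifies. The file layout and the use of SHA-256 are assumptions for illustration, not the OP's actual design.

import hashlib
from pathlib import Path

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def copy_is_valid(copy_dir: Path) -> bool:
    # Each copy is assumed to be a directory holding image.bin plus image.sha256.
    image = (copy_dir / "image.bin").read_bytes()
    stored = (copy_dir / "image.sha256").read_text().strip()
    return digest(image) == stored

def yearly_refresh(copies: list[Path]) -> None:
    # Keep the copies whose checksum still verifies; use one as the reference.
    good = [c for c in copies if copy_is_valid(c)]
    if not good:
        raise RuntimeError("no copy with a valid checksum survives")
    reference = (good[0] / "image.bin").read_bytes()
    # Rewrite every failed copy from the good reference.
    for c in copies:
        if c not in good:
            (c / "image.bin").write_bytes(reference)
            (c / "image.sha256").write_text(digest(reference))

# e.g. yearly_refresh([Path(f"/flash/copy{i:04d}") for i in range(1000)])

Of course, as the rediffusion point above suggests, this only helps while the stored copies remain physically readable at all.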
 

Bernhard Kuemel

At this time, this would either require hard-coding all possible
modes in which this can break down, with recovery instructions (infeasible)

My computer's POST can tell me if there's a memory, keyboard, floppy,
disk, etc. error. What's infeasible about identifying defective parts and
replacing them? Sure, there are limits, but we can try to find a solution
with a reasonable chance of success.
or true AI in there (likely infeasible as well).

Even humans can't solve all problems.

Bernhard
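
A minimal sketch of the kind of checksum/POST-style self-test being argued for here: each replaceable module registers a health check, and whatever fails is queued for the robots to swap. The module names and checks are hypothetical.

from typing import Callable

def ram_ok() -> bool:
    # Illustrative walking-ones pattern over a scratch buffer.
    buf = bytearray(1024)
    for i in range(len(buf)):
        buf[i] = 1 << (i % 8)
    return all(buf[i] == 1 << (i % 8) for i in range(len(buf)))

# Hypothetical registry of per-module health checks.
CHECKS: dict[str, Callable[[], bool]] = {
    "ram": ram_ok,
    # "sensor_bus": sensor_bus_ok, "ln2_plant": ln2_plant_ok, ...
}

def post() -> list[str]:
    # Return the modules that failed and should be pulled from the spares store.
    return [name for name, check in CHECKS.items() if not check()]

for module in post():
    print(f"schedule replacement of {module}")

The obvious limitation, raised later in the thread, is that the checker itself has to be trusted and can only find the failure modes it was told about.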
 

josephkk

Thanks. The 2000 seconds is correct. However, it should be 33
minutes, not hours. Sorry(tm).

De nada. I spent much of my working time in the last 30+ years looking
for others' slips and things. That amount of practice must produce some
little bit of skill.

?;-)
 

josephkk

That's one way to read between the lines. I just re-read all the OP's
postings in this thread and found in <[email protected]>
"This is about media being used during these 1000 years
as a source of firmware and operating systems to keep the
robotic facility functional."
Note the word "media".

His answer to my comments on the need for data verification in
<[email protected]>:
"The idea is to make the cold store for humans autonomously due
to a lack of trust in human reliability. If the autonomous facility
works, then there is no need for anyone to verify the data.
Verification is done via checksums internally. Ideally it would
be in a remote place, forgotten and eventually discovered, either
by chance, or by radio signals from the facility in case of
malfunction or at a set date like in 1000 year when technology
is expected to be able to scan/upload the minds of the frozen
humans. Actually I think 200 years probably suffice."

One does not normally refer to components, parts, firmware, etc. as
"media". I can't tell what he's planning to accomplish with a 1000
year self-maintaining robot. The 1000 year self-maintaining robot is
not the problem. It's whatever the robot is supposed to be doing for
1000 years that is the problem. Again, reading between the lines, it looks
like a robotic Alcor:
<http://www.alcor.org>
or, if the body or brain can somehow be reduced to data, a large data
time capsule. It's the political, social, and financial aspects of
operating such a robot that I find interesting.

That puts a really different spin on it. I wonder if the OP has read "Silicon
Beach", "The Two Faces of Tomorrow", Dr. Asimov's Robot Series, or any
similar books.

?-)
 

josephkk

All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1's and 0's in 1000yr.

If it does, one might assume that there are times during that period
where interest is sufficient to copy to new or better media.

I still have files that have survived five generations of media tech.

Wow. Is that from magnetic tape or paper tape?

?-)
 

Arno

My computer's POST can tell me if there's a memory, keyboard, floppy,
disk, etc. error. What's infeasible about identifying defective parts and
replacing them? Sure, there are limits, but we can try to find a solution
with a reasonable chance of success.

Your computer's POST cannot tell you if the POST itself is broken.
It has a table of specific checks (i.e. hardcoded ones); it will not
even attempt to diagnose other problems.
Even humans can't solve all problems.

Depends on the intelligence and experience of the human involved.
But you are right, and I have been pointing out that this
very project is very likely among the problems humans cannot solve.

Arno
 

Rod Speed

All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1's and 0's in 1000yr.

Bet it does.
If it does, one might assume that there are times during that period where
interest is sufficient to copy to new or better media.

That didn't happen much at all in the previous 1000 years.
I still have files that have survived five generations of media tech.

You don't see much of that over the previous thousands of years.
 

Rod Speed

I don't know. What I do know is that he's solving the wrong problem.

I don't agree. There is certainly more chance of an autonomous
robotic system lasting for 1000 years than trying to organise some
way of getting humans to maintain his cryogenic body storage
system over that length of time.
I don't believe it's possible to achieve 1000 year reliability
for electronics and mechanisms. If it moves, it breaks...
unless something extraordinary (and expensive) is employed.

He did say that cost was no object.
The list of probable hazards is just too great for such a device.

That stuff doesn't matter if it can repair what breaks.
If a species cannot change and evolve effectively,
environmental changes will guarantee extinction.

That's just plain wrong. There are plenty of
examples of species that have not evolved
at all over 1000 years and have survived fine.
The same can be said of all mechanisms, including electronics.

No, most obviously with the static storage of data
using a sufficiently stable storage mechanism.

That has in fact lasted MUCH longer than 1000
years already with some of the ways of doing that.
Mother nature, Microsoft, and satellite technology have provided
examples of long-term survivability that work. Mother nature
offers evolution, where a species adapts to changing conditions.

That's just one way it's handled that problem.
Microsoft has Windoze updates, which similarly adapt a known buggy
operating system into a somewhat less buggy operating system.

But hasn't even managed 100 years, let alone 1000.
The satellite industry has dealt with the inaccessibility of satellite
firmware and in flight RAM damage with reloadable firmware.

Yes, that's a rather better example, but none of that was ever
designed to last for 1000 years, and in fact we know it won't
because the satellites won't even stay there for that long even
if the electronics does work for that long, and we know it won't.
None of the products of these technologies would operate
for very long in their original form without adaptation.

Adaption is just one approach.

Clearly an autonomous robot manufacturing facility can
just keep making more of what fails whenever it fails as
long as the raw materials are always available.
Building a sealed system also has its problems.

All approaches have their problems.

That's why we have engineers, to solve them.

That's just biological systems. Closed data storage systems work fine.
The story is always the same. They get 99.9999% there, and the whole
thing collapses due to some unexpected and uncontrolled trivial oversight.

Doesn't happen with closed data storage systems.
The closest electronic parallel is again the satellite technology,
where environmental considerations (space junk, cosmic rays,
tin whiskers, solar cell deterioration, etc) cannot effectively
be repaired and eventually kill the satellite.

And even if they don't, the satellite will eventually
return to earth and burn up in the process.

That's all irrelevant to what's possible on earth, though.

We know that there are plenty of examples of
stuff that's lasted a lot longer than 1000 years.
Sometimes, it is politically expedient to spend huge amounts of money
to repair satellites (i.e. Hubble space telescope), but those are rare.
If Hubble had been in geosynchronous orbit, the space shuttle would not
have been able to reach it and Hubble would have died on arrival.

All irrelevant to what's feasible on earth.
Therefore, in my never humble opinion, the trick to making
electronics survive beyond their "normal" lifetimes is to perform
constant and regular updates. That doesn't mean an infinite
supply of spare parts or 3D printing extended to its logical
extreme. It means small but constant improvements in the design.

That's just one way. Even just replacement
of what dies is another obvious approach.
For firmware, that could be improvements through self
modifying code as in "The Adolescence of P1".
<http://en.wikipedia.org/wiki/The_Adolescence_of_P-1>

No need to improve it.
The trick is to follow the example of evolution and not make
any radical changes. The risk of failure with small changes are
small and reversible. The risks with dramatic improvements in
technology are large and probably not reversible.

It would be a hell of a lot safer to not even attempt
any improvements, just replace what dies.
Applying this to the OP's automated Alcor system is difficult, but not
impossible. For example, parallel redundancy is an obvious way to
improve reliability, but also a good way to implement evolutionary
electronics. If there are 10 processors running majority logic to
reach a decision or perform a function, there would not be a loss
of function if one of those processors engaged in evolutionary
experiments and improvements. If the code or hardware changes are
successful, then the remaining 9 processors could be slowly replaced.

Still a lot safer to just replace what dies.
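
For concreteness, a minimal sketch of the majority-logic idea quoted above: each redundant processor proposes a result, the value backed by a strict majority wins, and dissenters are flagged as candidates for repair or replacement. The data shapes and names are assumptions for illustration.

from collections import Counter

def vote(answers: dict[str, bytes]) -> tuple[bytes, list[str]]:
    # answers maps processor id -> the result it computed.
    tally = Counter(answers.values())
    winner, count = tally.most_common(1)[0]
    if count <= len(answers) // 2:
        raise RuntimeError("no strict majority; the system cannot decide")
    dissenters = [pid for pid, value in answers.items() if value != winner]
    return winner, dissenters

# Ten processors, one of them running experimental (or failing) code.
answers = {f"cpu{i}": b"\x2a" for i in range(9)}
answers["cpu9"] = b"\x00"
result, suspects = vote(answers)   # result == b"\x2a", suspects == ["cpu9"]

If the experimental change proves out, the remaining processors could then be migrated one at a time, as described above.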
Exactly how to create evolutionary electronics is probably worthy of a
Nobel Prize. It may also be our doom as it would likely involve risks
such as nano technology "gray goo" or a Forbin Project style computer
takeover.
<http://en.wikipedia.org/wiki/Forbin_Project>
The principle is simple enough, but the devil is in all the details.
In its fully automated form, it also can be capable of initiating
resource exhaustion. If it needs some rare earth element to function,

In fact the rare earths aren't actually rare at all.
and it has access to the commodities futures market computers,
it could easily corner the market in that element for itself. Lots of
other things that could go wrong.

But not if you just replace what breaks, and include in the original
enough of the raw materials that you have calculated will be needed
for those replacements, with, say, 10 times that amount
for safety.
The technology to make evolutionary computing
work is well beyond my level of expertise.

But replacing what breaks isn't.
I suspect it's going to be a priority if we ever establish
space colonies as the problems are similar. What I do
know is that building something with a 1000 year
reliability is not going to be a usable solution.

It's worked fine quite a few times in the past now.
 

Jeroen

I don't know. What I do know is that he's solving the wrong problem.
I don't believe it's possible to achieve 1000 year reliability for
electronics and mechanisms. If it moves, it breaks... unless
something extraordinary (and expensive) is employed. The list of
probable hazards is just too great for such a device. If a species
cannot change and evolve effectively, environmental changes will
guarantee extinction. The same can be said of all mechanisms,
including electronics.

Mother nature, Microsoft, and satellite technology have provided
examples of long-term survivability that work. Mother nature offers
evolution, where a species adapts to changing conditions. Microsoft
has Windoze updates, which similarly adapt a known buggy operating
system into a somewhat less buggy operating system.

I beg to differ! Somewhat different bugs, sure. Somewhat less buggy,
surely not!

Jeroen Belleman
 

josephkk

As for long-term storage of information, two thoughts come to mind.
First, for the millennium celebrations, The New York Times decided to
make and widely disperse a number of time capsules, these being
intended to be opened 1,000 years hence. Basically nothing worked
except nickel sheets with natural-language texts engraved into the
surface using an electron beam. The text was rendered in English and a
number of other languages, so it would also serve as a Rosetta Stone.
Anyway, the whole process was described in a set of articles in the NYT
Magazine published in 1999.

More recently, the ability to convert arbitrary text into DNA, and to
read the text back gives us a way to store huge amounts of binary
information for millennia.
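
For illustration only, the usual starting point is a fixed mapping of two bits per base; a toy sketch follows. Real schemes (see the Nature link below) add constraints, for example avoiding long runs of the same base, and error correction on top.

TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
FROM_BASE = {b: v for v, b in TO_BASE.items()}

def to_dna(data: bytes) -> str:
    # Two bits per base, most significant pair first.
    return "".join(TO_BASE[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def from_dna(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | FROM_BASE[base]
        out.append(byte)
    return bytes(out)

assert from_dna(to_dna(b"archive")) == b"archive"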

One can also store bulk binary on nickel sheet by writing code blocks
in hexadecimal, with embedded error correcting codes.

Joe Gwinn
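
A minimal sketch of the "hexadecimal blocks with embedded error-correcting codes" idea: expand each 4-bit nibble to a Hamming(7,4) codeword before engraving, so a single damaged bit per codeword can be corrected on read-back. The encoding choice and layout here are illustrative assumptions, not the design described in the articles.

def hamming74_encode(nibble: int) -> int:
    # Data bits d1..d4 (LSB first); parity bits are placed so that the
    # syndrome computed on decode points at any single flipped bit.
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]        # codeword positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(word: int) -> int:
    bits = [(word >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:                                       # correct a single-bit error
        bits[syndrome - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

def encode_bytes(data: bytes) -> str:
    # One two-hex-digit codeword per nibble, ready to be engraved as text.
    words = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0xF):
            words.append(f"{hamming74_encode(nibble):02X}")
    return " ".join(words)

# Round trip with one bit of simulated sheet damage in the first codeword.
words = [int(w, 16) for w in encode_bytes(b"OK").split()]
words[0] ^= 0b0000100
nibbles = [hamming74_decode(w) for w in words]
assert bytes((nibbles[i] << 4) | nibbles[i + 1]
             for i in range(0, len(nibbles), 2)) == b"OK"

Compared with plain hexadecimal this roughly doubles the number of engraved characters, which is the price of correcting one flipped bit per codeword.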


Some links:

<http://www.nytimes.com/1999/12/02/arts/design-is-selected-for-times-cap
sule.html>

<http://www.nytimes.com/1999/12/05/magazine/how-to-make-a-time-capsule.h
tml?pagewanted=all&src=pm>

<http://online.wsj.com/article/SB100014241278873245393045782598835075431
50.html>

<http://www.nature.com/nature/journal/vaop/ncurrent/full/nature11875.htm
l>


All but the last link broke due to word wrap. If you can't read headers, I
am using Agent 6, one of the better respected news (and email) clients.

Carets "<>" ain't a perfect solution. Fortunately Joe Gwinn also placed
them on separate lines, making text selection reasonably easy. Using
copy and paste worked just fine.

?-)
 

josephkk

Tunnel diodes made in the 1960s are already rediffusing into useless globs of germanium or silicon. I think the dopants making all the p-n junctions in those ROMS you suggest will also rediffuse over the centuries. Semiconductors as we know them might be ruled out unless the robots can operate a fab-line.

Yes, also mining and smelting of silicon and the various dopants like
boron, phosphorus, gallium, arsenic, indium, antimony and so on. This is
getting to need a small, very technical society here (a point I tried to
make before).

?-)
 

Bernhard Kuemel

Actually, the biggest problem is the human operators. The Three Mile
Island and Chernobyl reactor meltdowns come to mind, where the humans
involved made things worse by their attempts to fix things. Yeah,
maybe autonomous would be better than human maintained.


I once worked on a cost plus project, which is essentially an
unlimited cost system. They pulled the plug before we even got
started because we had exceeded some unstated limit. There's no such
thing as "cost is no object".

I said: "Price is not a big issue, if necessary." I know it's gonna be
expensive and we certainly need custom designed parts, but a whole
semiconductor fab and developing radically new semiconductors are
probably beyond our limits.
Repair how and using what materials?

Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermocouples,
etc. They need to be designed and arranged so the robots can replace them.
Ok, let's see if that works. The typical small signal transistor has
an MTBF of 300,000 to 600,000 hrs or 34 to 72 years. I'll call it 50
years so I can do the math without finding my calculator. MTBF (mean
time between failures) does not predict the life of the device, but
merely predicts the interval at which failures might be expected. So,
for the 1000 year life of this device, a single common signal
transistor would be expected to blow up about 20 times. Assuming the robot
has about 1000 such transistors, you would need 20,000 spares to make
this work. You can increase the MTBF using design methods common in
satellite work, but at best, you might be able to increase it to a few
million hours.

It's quite common for normal computer parts to work for 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.
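
A back-of-the-envelope check of that spares arithmetic, taking the thread's numbers at face value (a sketch, not a reliability analysis):

mission_years = 1000
part_life_years = 50        # "high reliability parts probably last 50 years"
spares_per_type = 100       # "keep 100 spare parts of each"

replacements_per_socket = mission_years / part_life_years   # 20 replacements
margin = spares_per_type / replacements_per_socket          # 5x headroom
print(f"{replacements_per_socket:.0f} replacements per socket, {margin:.0f}x margin")

So 100 spares of each 50-year part gives roughly a 5x margin per socket, provided, as noted, the spares themselves survive storage.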

Also robots are usually idle and only active when there's something to
replace. The power supply, LN2 generator and sensors are more active.

I wonder how reliable rails or overhead cranes that carry robots and
parts around are. If replacing rails or overhead crane beams is
necessary and unfeasible, the robots will probably drive with wheels.
Geosynchronous satellites are unlikely to suffer from serious orbital
decay. However, they have been known to drift out of their assigned
orbital slot due to various failures. Unlike LEO and MEO, their
useful life is not dictated by orbital decay. So, why are they not
designed to last more than about 30 years?

Because we evolve. We update TV systems, switch from analog to digital,
etc. My cryo store just needs to do the same thing for a long time.
At the risk of being repetitive, the reason that one needs to improve
firmware over a 1000 year time span is to allow it to adapt to
unpredictable and changing conditions.

Initially there will be humans verifying how the cryo store is doing and
improving soft/firmware and probably some hardware, too, but there may
well be a point where they are no longer available. Then it shall
continue autonomously.
True. However, not providing a means of improving or adapting the
system to changing conditions will relegate this machine to the junk
yard in a fairly short time. All it takes is one hiccup or
environmental "leak", that wasn't considered by the designers, and
it's dead.

Yes. We need to consider very thoroughly every failure mode. And when
something unexpected happens, the cryo facility will call for help via
radio/internet. I even thought of serving live video of the facility so
it remains popular and people might call the cops if someone tries to
harm it. Volunteers could fix bugs or implement hard/software for
failure modes that weren't considered.
 

Jasen Betts

I said: "Price is not a big issue, if necessary." I know it's gonna be
expensive and we certainly need custom designed parts, but a whole
semiconductor fab and developing radically new semiconductors are
probably beyond our limits.

so you're prepared to take the performance hit and use thermionics
instead? it's not like they're going to work after 1000 years either
(unless perhaps stored in a vacuum.) but the fab is simpler.
Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermo couples,
etc. They need to be designed and arranged so the robots can replace them.

where are you going to get a cpu in 700 years time? the ones in the
store will have diffused away.
It's quite common that normal computer parts work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.

the problem is that they do... keeping them on ice (or in liquid helium)
might help enough
Also robots are usually idle and only active when there's something to
replace. The power supply, LN2 generator and sensors are more active.

I wonder how reliable rails or overhead cranes that carry robots and
parts around are. If replacing rails or overhead crane beams is
necessary and unfeasible, the robots will probably drive with wheels.

build them from stainless steel with the tracks thick enough to withstand 1000
years of traffic; perhaps have a few spare cranes parked at the end of
the track.
Because we evolve. We update TV systems, switch from analog to digital,
etc. My cryo store just needs to do the same thing for a long time.

So in 1000 years the robots have sold their charges on ebay and are
playing online poker to buy electricity.
 

Rod Speed

Ok, not the best example.

Yeah, it's nothing like evolution in fact.
The problem is that features and functions
get added to software faster than bug fixes.

The real problem is that computing is so complicated that
even with a system that never gets anything added at all,
you can never get rid of all bugs, most obviously with
those that are only seen in the most unusual situations.

And it's just not possible to debug something
like the year-2000 'bug' until it occurs either.
The inevitable result is a product with cancerous growth,

That's silly.
feature bloat,

Yes, what people want to do keeps being invented.

Most obviously currently with air gestures with touch systems.
and plenty of bugs.

Because computing is so complex.
I sometimes suspect that this is intentional,

Yes, that's certainly true with new features.
as the only reason the users upgrade to the next latest
version is in the futile hope that it will have fewer bugs.

That's just plain wrong. Quite a few of them upgrade to
get stuff that isn't in what they currently have, or is much
better done than in what they currently have. That's been
seen with USB and wifi alone.
MS made that mistake with XP,

No, just fucked up some of the stuff done with Vista that XP doesn't have.
which is actually quite good and reasonable usable,

But can obviously be improved on.
causing large corporate users to ask "why bother to upgrade"?

Plenty can see good reasons to upgrade.
13 years later, it's still going strong,

But Win7 is a lot better in a great raft of areas.
despite numerous failed attempts by MS to kill it.

MS has not tried to kill it, just tried to encourage
people to upgrade. They would be stupid if they
did not given where their revenue comes from.
Certainly, they're not going to make that mistake again.

They already did with Win8.
However, I'll make it easy for someone to prove me wrong.
Just name me one software package or major application
that either continues to be sold in its original version,

Even if someone could do that, it proves nothing about
your original claim. It's just a fact of life with revenues.

We see the same thing with cars and almost everything else.

There is even still some product improvement
with stuff as basic as cutlery, for a reason.
or which has become smaller, faster, or both?
Win7.

Offhand, I can't think of any that are even close.

Then you need to get out more.
Evolution and growth drives the software industry, because it works.

Evolution drives the software industry because that's what produces revenue.

It even happens with stuff like Linux which doesn't have any revenue, for a
reason.
 