Maker Pro

My Vintage Dream PC


ItsASecretDummy

The idea of casually shipping rev 2.3.04b is a nightmare.

Deviation approvals are a pain in the ass too.

That is why full revs are utilized often, and also why the number climbs
quickly until the final design cycles have been run through.

Mid-rev shipments MUST be flagged as deviating from the final design, or
you can experience serious field problems that in some cases could mean
outright failures or repair headaches at inopportune times.
 

StickThatInYourPipeAndSmokeIt

He automatically argues with people, insulting them and claiming
superior knowledge. But since he's AlwaysWrong, we can just have fun
with him.

No. I argue with a select group of assholes that have been targeting
me for the last few years. You are one such asshole, asshole.
 

FatBytestard

But no two Windows systems are ever the same. My home system is a pure
disk image of the one I run at work, so both are pretty good!


If that hardware is not identical, the OS is not. It does not matter
what image you started with. As soon as you perform one update, your
drivers become more specific to the local hardware. Or have you been such
an MS-hating dipass for so many years that you did not know that MS
actually searches hardware manufacturers' sites for driver updates?
 

FatBytestard

We're designing a VLIW (48 bit instructions) computing engine to run
in an FPGA. It currently has two opcodes, MOVE and WAIT. I'm trying to
talk my FPGA guy into making it one opcode, MOVE. That way, we
wouldn't need an opcode field at all.

John
AlwaysWaiting rest state. Then only need MOVE to function both ways.
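John's MOVE-only machine is essentially what the literature calls a transport-triggered architecture: the only instruction is a data move, and computation happens as a side effect of writing to a function unit's trigger port. A minimal sketch of the idea, in Python — the port names, the adder unit, and the three-move program are all illustrative assumptions, not anything from the actual FPGA design:

```python
# Sketch of a transport-triggered ("MOVE-only") machine: every instruction
# is MOVE src -> dst, and writing to a function unit's trigger port fires
# the operation. Port names and the program below are illustrative.

class TTA:
    def __init__(self):
        self.regs = {}          # general registers
        self.add_a = 0          # ALU operand port A
        self.acc = 0            # ALU result port

    def read(self, src):
        if src.startswith("#"):         # immediate operand
            return int(src[1:])
        if src == "acc":
            return self.acc
        return self.regs.get(src, 0)

    def write(self, dst, value):
        if dst == "add_a":
            self.add_a = value
        elif dst == "add_b":            # trigger port: writing fires the add
            self.acc = self.add_a + value
        else:
            self.regs[dst] = value

    def run(self, program):
        for src, dst in program:        # every instruction is just a MOVE
            self.write(dst, self.read(src))

m = TTA()
m.run([("#2", "add_a"), ("#3", "add_b"), ("acc", "r0")])   # r0 = 2 + 3
print(m.regs["r0"])   # -> 5
```

With no opcode field, the 48-bit word is free to encode nothing but source and destination addresses, which is exactly why dropping WAIT is attractive.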
 

FatBytestard

I'd checksum them, though. That way one could do a quick check now and
again. I remember one company that lost its data even though it was
backed up to two drives, because the writing machine had a bad SCSI
board.


Full write with full verify against original data.
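The "checksum them" suggestion amounts to recording a digest per file at backup time and re-verifying the copies later. A minimal sketch using SHA-256 — the function names and directory layout are illustrative assumptions:

```python
# Sketch of per-file checksumming for backups: build a manifest of
# SHA-256 digests at write time, then re-verify the media later.
import hashlib
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def make_manifest(root):
    """Map relative path -> digest for every file under root."""
    root = Path(root)
    return {str(p.relative_to(root)): sha256_of(p)
            for p in root.rglob("*") if p.is_file()}

def verify(root, manifest):
    """Return the files that are missing or whose contents changed."""
    current = make_manifest(root)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]
```

A bad controller that silently corrupts writes, as in the SCSI anecdote, shows up as a digest mismatch the first time `verify` is run against the copy.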
 

FatBytestard

Chez Watt?

It's either zero or it's non-zero. If you're a one-in-a-million guy in
China, there are a thousand people just like you.


I am one from upwards of twenty billion, when one considers once living
souls. Perhaps even more.

A hard drive's on-drive electronics do a pretty good job of checking
EVERY write, examining the sector condition, and automatically
mapping out bad sectors. With SMART turned on, one usually gets a
warning before the drive dies, if one watches one's BIOS screen during
boot like one should.

So it is pretty much zero chance to write it wrong and have it get
reported as having been written correctly. Particularly since the write
gets checked internally at the hardware level by the drive itself,
regardless of the file system or control that performed the write. The
odds are astronomical, in fact. That is the whole idea with mass storage
of data.

We have gotten better and better at it all along. Look at the current
densities. They shattered records!

As a counterexample, look at how a floppy drive performs on current
mass-produced, quick-formatted cheap media. It is hard to maintain a
drive that works well, and it is hard to keep the discs' data integrity
up.

My 2.88 drives and media NEVER fail, but BIOS support for them is for
the most part no longer existent.
 

FatBytestard

I guess my point, coming full circle, is that a "big" OS has millions
of lines of code, written by hundreds of people, running at the "ring
0" level where any error crashes the whole system. A secure OS must be
a tiny chunk of code, absolutely insulated from damage by more complex,
less reliable stuff.

You've lost your grasp of the computing paradigm. You're in left field.
You're on layer 8.

What you describe is an arbitrator that decides whether requestors get
their requests filled.

It would have to be elaborate to know how to make those decisions.

We already have this in place. They are called Microprocessors.

To create an arbitrator for user and network and system requests as an
OS function... well... we have that already too. Several flavors. Made
by engineers way smarter than you, who have thought through way more
things than you have. We even have varieties that get written, as well
as fixed, by their own user base.

Y'all need to get off this Obama mentality "It's all wrong and we need
to fix it" crap. It is eroding your brains.
 

FatBytestard

Yup. If some cores were device controllers, and only they could do,
say, DMA or tcp/ip, some things like system-takeover buffer overflow
exploits might be made impossible.

John

I said that days ago.
 

Archimedes' Lever

On Wed, 03 Jun 2009 17:21:33 -0700, Archimedes' Lever wrote:

No wonder you refuse to take the puzzle challenge. You only want to
have the answer before you try. No risk. No guts. No brains. All
full of it.

You are the RETARD that doesn't understand the Kobayashi Maru.

You see, it is you that is in an unwinnable situation.
 

Archimedes' Lever

The hardware is exactly identical. HP ProLiant servers, hot-plug, ECC
ram, redundant bios, redundant power supplies and fans, hot-plug RAID.
I bought 16 of them so all of our machines are the same. With about 20
machine-years so far, absolutely nothing has gone wrong.

Hahahahaha! Some P.T. Barnum dude talked you into buying 16 older-model
servers, touting them as "so reliable". Oooooh... they gotz ECC RAM. Oh
boy, servers as desktop workstations! You are a true computing hero! NOT!

You really are a sucker, I am quite sure.

I could have put together 20 machines that were just as reliable, while
being at least half again faster. Guaranteed.
 

Kim Enkovaara

jmfbahciv said:
None of the above. I was thinking about xray scanning at
airports, etc.

X-ray scanners use quite low-energy fields. I have never had any problems
with my mobile, for example, when it has gone through X-ray.

In satellites the energy of the particles is much higher, and they still
work just fine. It's just a question of design and of compromises
between cost and reliability; nothing comes for free.
I have no faith in EMC requirements. I had a stove in Southboro
which had to be unplugged in order to tune the radio to any AM
station. The tech who was called about this problem made the
comment, "Nobody listens to AM radio anymore."

that's what the kiddies are getting taught these days.

The EMC requirements are not defined by kiddies but standards
organizations. And the tests must be made in accredited labs
to get the paperwork done and permit to sell the products.
Especially for telecoms equipment the limits are not always easy
to achieve.

The kiddie might have been someone with only the knowledge of how to
connect the stove, who reads ready-made answers from the manual and
imagines something if the manual has no answer. You don't put real
engineers on calls to customers. They are shielded by many layers of
customer service and escalation protocols.

Was this the house where you had all kinds of wacky problems with
the electrical connections? Maybe the stove was just not correctly
connected (grounded, for example), and the filtering did not work.

--Kim
 

Kim Enkovaara

jmfbahciv said:
the problem I'm seeing here (with the people who are posting) is that
they are limited to thinking in single-user, single-owner systems,
where single-user implies a single job or task and not multiple
tasks/human being.

Single-user, single-owner systems are the most common ones, but they
still run multiple tasks in parallel. But this is an old thread; you just
don't believe that anything made during the last 20 years can support many
users or tasks. Don't bring Windows 3.1 into the discussion like always,
it is ancient already...

I have used unix machines that had 500-1000 users logged in at the same
time, each user with tens of tasks in parallel, etc. But maybe I have
been dreaming. The XP machine in front of me has had a few months of
uptime and runs many processes in parallel all the time. I have X
emulation on the other monitor, connected to a big linux cluster that
runs hundreds of users doing parallel tasks all the time.

It's you that is still thinking that single task is the norm, it is not.

--Kim
 

Kim Enkovaara

JosephKK said:
And just exactly how do you decide which CPU is faulty?

If two CPUs are used, internal diagnostics are needed for the
detection. If three CPUs are used, they can vote on which one is
incorrect. In the most critical systems there are three parallel
systems, implemented on different platforms by multiple independent
teams. The results from the systems go through a voting process.

--Kim
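The three-channel scheme Kim describes is classic triple modular redundancy: three independently implemented systems compute the same result, and a voter passes on whatever at least two of them agree on. A minimal sketch of the voter (the function name is an illustrative assumption):

```python
# Sketch of a triple-modular-redundancy voter: accept the value that a
# majority of the three independent channels agree on.
from collections import Counter

def tmr_vote(a, b, c):
    """Return the majority value; raise if all three channels disagree."""
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all three channels disagree")
    return value

# One faulty channel is outvoted without affecting the output:
print(tmr_vote(42, 42, 7))   # -> 42
```

The point of using different platforms and independent teams is to make a common-mode failure — all three channels wrong in the same way, which the voter cannot catch — as unlikely as possible.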
 

FatBytestard

No. New backup written on fresh media, per your description. Restore
would be a different case.
Learn to follow threads. He is talking about hard drive media.
 

Richard Cranium

Hahahahaha! Some P.T. Barnum dude talked you into buying 16 older-model
servers, touting them as "so reliable". Oooooh... they gotz ECC RAM. Oh
boy, servers as desktop workstations! You are a true computing hero! NOT!

You really are a sucker, I am quite sure.

I could have put together 20 machines that were just as reliable, while
being at least half again faster. Guaranteed.


Of course you could have done it Archie .. you're a genius. You're
the pinnacle of success in any topic that comes up. You probably
built your first PC reaching out from mommy's snatch while she slept
during her sixth month. And you just as likely did it in between your
Sanskrit and swimming lessons. Too bad mommy wouldn't frequent
billiard parlors more often back then so you could have covered your
first pool table eleven years earlier.

Ready to try the puzzle challenge Archie - or are you still scared
shitless of not being able to solve it?

You're ugly, your dick is small and everybody fucked your mother.
 

Richard Cranium

You are the RETARD that doesn't understand the Kobayashi Maru.

You see, it is you that is in an unwinnable situation.


Then please explain why everyone successfully dumps on you whenever
they feel like playing you. It is you who must accept the no-win
scenario because you are a no-win personality. Everything you have
ever done, are doing, or will ever do, is useless. If you were to be
vaporized right now, no one would notice that you were gone - save for
a few freaks and a couple of goats - and the goats would be better
off.
 

jmfbahciv

John said:
Well, we've never had to do a full restore.

You have a bug in your procedures, then.

I do occasionally dig into
The Cave and pull an old CD or DVD backup, like when I'm working at
home and want to see an older design, or the rare occasion when a file
is missing or otherwise wrong in the M:\LIB directory. The files are
always there, no problems so far.

They're just files burned into DVDs. Not a lot can go wrong. Doing
zero-based weekly backups onto write-once media gives us a lot of
redundancy.

Only if you have all the files you need. How do you detect that a
file has disappeared?

/BAH
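One common answer to /BAH's question is to keep a catalog of the file names each backup is supposed to contain and diff it against what the media actually holds; spot checks ("the files are always there") never notice a file that silently vanished. A sketch, with illustrative function names:

```python
# Sketch of detecting a disappeared file: save the expected file list at
# backup time, then compare it against the media later.
from pathlib import Path

def catalog(root):
    """Set of relative paths for every file under root."""
    root = Path(root)
    return {str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()}

def missing_files(root, expected):
    """Files the catalog says should exist but the backup no longer has."""
    return sorted(expected - catalog(root))
```

The catalog is written once, when the weekly zero-based backup is burned; any later run of `missing_files` against the DVD flags names that have gone absent, independent of whether the remaining files still read back correctly.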
 