Is seti off-line? None of my units from last night have been sent, and I can't get any more.
seti off-line
-
From the Technical News Reports page:
July 12th, 2000
We are fixing some bugs with the Informix database today, so the data server will be down until these are fixed.
BTW: the Matrox Users' stats on seti.matroxusers.com aren't updated either: same reason.
Martin
Comment
-
Damnit, as soon as I start crunching again, the servers f*ck me up. Still no seti crunching at home; still no phone line. Got an iPAQ at work, 500 MHz Coppermine running Win2K Pro. Does a WU in about 8.25 hrs. Using another similar machine that I'm testing, but just for a short time.

D3/\/7YCR4CK3R
Ryzen: Asrock B450M Pro4, Ryzen 5 2600, 16GB G-Skill Ripjaws V Series DDR4 PC4-25600 RAM, 1TB Seagate SATA HD, 256GB myDigital PCIEx4 M.2 SSD, Samsung LI24T350FHNXZA 24" HDMI LED monitor, Klipsch Promedia 4.2 400, Win11
Home: M1 Mac Mini 8GB 256GB
Surgery: HP Stream 200-010 Mini Desktop, Intel Celeron 2957U processor, 6 GB RAM, ADATA 128 GB SSD, Win 10 Home ver 22H2
Frontdesk: Beelink T4 8GB
Comment
-
Server problems my butt! All of our friends from Berkeley were across the Bay in Golden Gate Park, trying out for "Survivor."
http://www.sfgate.com/cgi-bin/article.cgi?file=/examiner/hotnews/stories/13/survivor.dtl
That guy tearing off his shirt is a UC-Berkeley police officer, by the way. God, this is a classy place.
Paul
paulcs@flashcom.net
[This message has been edited by paulcs (edited 14 July 2000).]
Comment
-
Here's the explanation from Eric J. Korpela @ Berkeley for those who want to know:
"Because of the limitations of Informix (2 GB chunks) and the limitations of Solaris (on the number of partitions per disk), we had been limited to using 9 GB drives for the science database, and were rapidly approaching the number of disks our controllers could handle. As a workaround, we were investigating using Veritas to get around these limitations (which would allow us to use 18 GB drives, in effect doubling our disk space).
Before setting out to migrate the drives over to the new system, we decided to perform tests to make sure it would work. The information and advice we had was to create a separate database space on the new drives, so a failure wouldn't affect the existing database. Well, it turns out one of the tests did fail several days ago, but the database continued to operate just fine, as predicted. The failure, however, corrupted the root chunk of the new database space. This wasn't a problem until we restarted the database machine to bring a new tape drive on-line. After the reboot, Informix complained that it couldn't access the corrupted chunk and wouldn't allow inserts into any database, including those unrelated to the missing chunk. We couldn't remove the bad chunk because it was corrupted, and Informix couldn't fix the bad chunk because it was too corrupted. We couldn't restore the bad chunk from a backup because no backup of the bad chunk existed. So we were stuck with a database that was readable, but not writable.
We eventually came to the conclusion that the only way to get back up in any reasonable amount of time was to restore the database to the new 18 GB disks using the last full backup (which took a bit more than 12 hours; we should thank Matt and Jeff for getting up in the middle of the night to change tapes). That's done, and we're getting to the point where we'll have enough usable work units to restart in a few minutes.
User stats shouldn't be affected, but science data more recent than the last full backup won't be in the new database. It'll look like we've gone backwards on the graphs page. We've still got the missing science on the old 9 GB drives, but it'll take time and lots of Informix tech support to get it into the new database.
We've also got the problem that results will be coming back that don't
match workunits in the database. I'm going to be stashing these until
I figure out what to do with them. They'll probably have to sit on disk
until we get the database merged again.
Well, I'm off to start the server now.
Eric
--
Eric Korpela korpela@ssl.berkeley.edu
"
Comment
-
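For anyone curious about the mechanics Eric describes, here are a few rough sketches in Python. First, the drive-size arithmetic: with 2 GB chunks and a fixed number of partitions per disk, a big drive can't be fully used without a volume manager. The usable-slice count below is my assumption, not a figure from Eric's post.
[code]
# Rough arithmetic behind the 9 GB -> 18 GB drive limits described above.
CHUNK_GB = 2        # Informix chunk size ceiling
SOLARIS_SLICES = 7  # assumed usable partitions per disk without a volume manager

def usable_gb(drive_gb, max_partitions):
    """Space Informix can address on one drive: one chunk per partition."""
    chunks = min(drive_gb // CHUNK_GB, max_partitions)
    return chunks * CHUNK_GB

print(usable_gb(9, SOLARIS_SLICES))     # 8  -- a 9 GB drive is almost fully used
print(usable_gb(18, SOLARIS_SLICES))    # 14 -- part of an 18 GB drive is stranded
print(usable_gb(18, max_partitions=9))  # 18 -- Veritas volumes lift the slice cap
[/code]
With the slice cap lifted, swapping each 9 GB drive for an 18 GB one doubles the space behind the same controllers.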
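The "readable, but not writable" state is also easy to picture as a probe: selects succeed, and any insert fails, even on tables unrelated to the bad chunk. A sketch against a DB-API-style Informix driver; the informixdb module name, database, and table names are all assumptions.
[code]
# Probe for the stuck state Eric describes: reads work, inserts don't.
import informixdb  # assumed Informix DB-API driver

conn = informixdb.connect("sah")  # hypothetical database name
cur = conn.cursor()

cur.execute("SELECT COUNT(*) FROM workunit")  # reads still succeed
print("workunits readable:", cur.fetchone()[0])

try:
    # Even tables unrelated to the corrupted chunk refuse inserts.
    cur.execute("INSERT INTO healthcheck (checked_at) VALUES (CURRENT)")
    conn.commit()
    print("database is writable")
except informixdb.DatabaseError as err:
    conn.rollback()
    print("database is read-only:", err)
[/code]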
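And stashing results whose workunits postdate the last full backup could look like this: check the workunit ID, and if it's unknown, park the result on disk for the eventual merge. All field names and paths here are invented for illustration; the post doesn't say how Eric actually stores them.
[code]
# Sketch of stashing results whose workunits aren't in the restored database.
import json
from pathlib import Path

STASH_DIR = Path("/home/seti/orphaned_results")  # assumed location
STASH_DIR.mkdir(parents=True, exist_ok=True)

def handle_result(result: dict, known_workunit_ids: set) -> bool:
    """Insert normally if the workunit exists; otherwise stash to disk."""
    if result["workunit_id"] in known_workunit_ids:
        return True  # hand off to the normal insert path
    stash_file = STASH_DIR / f"result_{result['result_id']}.json"
    stash_file.write_text(json.dumps(result))
    return False  # sits on disk until the old database is merged back

# A result for a workunit that only exists on the old 9 GB drives:
ok = handle_result({"result_id": 42, "workunit_id": 98765},
                   known_workunit_ids={101, 102, 103})
print("inserted" if ok else "stashed for the merge")
[/code]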