
Backup to file server



    Date: Wed, 4 Mar 1992 17:32 EST
    (Soapbox: Now that the Unix file system is basically becoming the standard,
    why don't they incorporate version numbers into it?)

An answer, and more soapboxing 

The argument I have gotten from the unix types is that versioning file systems
waste disk space: "Also, it makes the inodes bigger, and it's only a VMS thing
anyway." I point out the number of times that versions have saved my or a close
friend's nether regions, and it sails right past. When I tell them about soft
delete, they lack the cultural capacity to understand what is meant, let alone
why it is desirable.
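
For the unix-minded, here is a minimal sketch of the idea in Python; the trash
directory, names, and retention period are all mine, not any real
implementation. The point is that "delete" just moves the file aside, and a
separate, deliberate expunge is the only destructive step:

    import os
    import shutil
    import time

    TRASH = "/var/trash"  # hypothetical trash area

    def soft_delete(path):
        """Move a file aside instead of unlinking it."""
        os.makedirs(TRASH, exist_ok=True)
        # Stamp the name so repeated deletes of the same file don't
        # clobber each other -- a poor man's version number.
        dest = os.path.join(TRASH, "%s.%d" % (os.path.basename(path),
                                              int(time.time())))
        shutil.move(path, dest)
        return dest

    def expunge(older_than_days=14):
        """The actual reclaim of disk space, run deliberately."""
        cutoff = time.time() - older_than_days * 86400
        for name in os.listdir(TRASH):
            full = os.path.join(TRASH, name)
            if os.path.getmtime(full) < cutoff:
                os.remove(full)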

The unix types I have talked to still have the overloaded university timesharing
mentality: editors are praised for computational rather than human efficiency
(that, and the baby duck syndrome, are the only reasons I can see why VI lives),
and disk capacity is considered more expensive than the data that lives on it
(in their minds, the data got there for "free").  They haven't made the
connection that people's time is what we should be optimizing for, and that
choosing an editor because it uses 1% rather than 5% of the cpu is wrongheaded
(especially given that most of the time, a single copy of the editor will be the
only program running on a given cpu).

I see a similar thing with micros. I was praising a user for backing up his
McToaster, then watching with horror as he tried to determine which of his two
tapes was the older; or noticing (again with horror) that the central sun
fileserver has only 5 daily incremental tapes and only 3 weekly consolidateds.
The unix sort didn't understand that they would lose big if any single tape
proved bad.

For the record: the backup schedule on the 'bolix servers is daily
incrementals, with a complete once weekly. (Important: the complete does not
mark the files as backed up, to make sure that any files created between the
finish of an incremental and the start of the complete make it onto the next
incremental, so all files are on at least 2 tapes; see the sketch below.)
Every other complete is sent offsite. There is a 4 week minimum before an
incremental gets re-used and a 1 year minimum before a complete gets re-used,
and (I haven't told them this part yet) the completes sent offsite will remain
there until we no longer have access to a machine that can read the tape. For
this reason, I have insisted that we keep the 9-track connected to one of the
fileservers, even though it hasn't been used in two years.
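
The marking logic is the crux, so here is a minimal sketch of it in Python.
The dirty-flag rendering is mine, not the actual 'bolix dumper:

    from dataclasses import dataclass

    @dataclass
    class File:
        name: str
        modified_since_backup: bool = True  # new/changed files start dirty

    def incremental_dump(files, tape):
        """Dump files changed since the last backup, then mark them clean."""
        for f in files:
            if f.modified_since_backup:
                tape.append(f.name)
                f.modified_since_backup = False

    def complete_dump(files, tape):
        """Dump everything, but deliberately leave the dirty flags alone,
        so a file created after the last incremental still goes out on
        the next incremental as well."""
        for f in files:
            tape.append(f.name)

    # A file created after Monday's incremental but before the complete:
    files = [File("new-cell.design")]
    weekly, tuesday = [], []
    complete_dump(files, weekly)      # lands on the weekly complete...
    incremental_dump(files, tuesday)  # ...and again on the next incremental
    assert "new-cell.design" in weekly and "new-cell.design" in tuesday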

Note: we use an exabyte plugged into the back of one of our I-machines (I don't
know if the host is a 1200 or a 400), and with the load balancing (we have
three full-time servers) the servers are 2GB each, which means that a full dump
doesn't require a tape change and can run unattended. (Which is how we manage
to do 3 completes a week.)
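
The arithmetic, with a cartridge capacity that is my assumption (roughly what
the first-generation 8mm drives held), not a figure from above:

    TAPE_GB = 2.3    # assumed capacity of one 8mm exabyte cartridge
    SERVER_GB = 2.0  # per-server data, load-balanced across 3 servers
    # One server's complete fits on one cartridge, so the dump never
    # stops to ask for a tape change and can run overnight unattended.
    assert SERVER_GB <= TAPE_GB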

I admit that in our biz (chip design) the project cycles are longer than most;
it can be 3 or more years between when the designer first takes mouse to
schematic and when one of the sustaining engineers makes a small change to a
metal mask to improve yields on a production part. 3 weeks ago, we wound up
bringing in a 2 year old backup, as someone wanted to lift a chunk of circuitry
off a 5 year old part, and the machine readable schematics had gotten "cleaned
up" by some overzealous person. I have (for now) convinced Mgt that we should
budget (given our current size) $1500/yr to accommodate 1 GB/yr of growth.

I will check whether there are corporate objections to my distributing it, but
I do have some code to dumb down the interface to backup: the current operator
gets to type a single CP command, and it asks him for the tape numbers,
auto-schedules the nightly complete if it is that server's turn, keeps logs,
sends mail if something goes wrong, etc.
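
The real thing is a CP command on the Lisp machine; purely as a shape sketch
(the server names, log path, and dump command below are all hypothetical), the
wrapper does roughly this, in Python:

    import datetime
    import smtplib
    import subprocess
    from email.message import EmailMessage

    SERVERS = ["srv-a", "srv-b", "srv-c"]  # hypothetical server names
    LOG = "/var/log/backup.log"            # hypothetical log file
    ADMIN = "operator@example.com"         # hypothetical address

    def log(msg):
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with open(LOG, "a") as f:
            f.write("%s %s\n" % (stamp, msg))

    def mail_failure(subject, body):
        msg = EmailMessage()
        msg["From"], msg["To"], msg["Subject"] = ADMIN, ADMIN, subject
        msg.set_content(body)
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)

    def nightly(server):
        tape = input("Tape number for tonight's dump: ")
        # Three servers, three completes a week: give each server a fixed
        # weekday (Mon/Wed/Fri) for its complete, incrementals otherwise.
        is_complete = (datetime.date.today().weekday()
                       == SERVERS.index(server) * 2)
        kind = "complete" if is_complete else "incremental"
        log("starting %s dump of %s to tape %s" % (kind, server, tape))
        try:
            # Placeholder invocation; the real dumper is site-specific.
            subprocess.run(["run-dump", "--" + kind, server, tape], check=True)
            log("finished %s dump of %s" % (kind, server))
        except Exception as exc:
            log("FAILED: %s" % exc)
            mail_failure("backup failed on %s" % server, str(exc))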


<dp>