
Thread: A 10 TB Personal Cloud (Network Archiving System) for $350

  1. #1
    Senior Member
    Join Date: Aug 2001
    Location: Melbourne, FL USA
    Posts: 1,572

    A 10 TB Personal Cloud (Network Archiving System) for $350

    The biggest cost is in the ability to access it through the network, so I separated the two cost elements.

    First I purchased a 4 TB My Cloud (Network Archiving System); they come in 2, 3, 4, 6, and 8 TB sizes, and of course I ran out of storage. https://www.amazon.com/gp/product/B0...e?ie=UTF8&th=1

    Then I purchased an 8 TB USB 3.0 external hard drive; they come in sizes between 3 TB and 20 TB. I plugged it into one of the two USB ports on the My Cloud, and I now have a 10 TB network archiving system and can add another or a larger hard drive when needed. https://www.amazon.com/gp/product/B0...?ie=UTF8&psc=1

    If you are thinking of buying a Network Archiving System, check the price per terabyte if you configure it as described above.
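    For comparison's sake, the per-terabyte arithmetic is easy to script. A minimal Python sketch, using only the $350 / 10 TB figure from the thread title (the commented-out line is where you would plug in a pre-built NAS quote of your own):

        # Cost per terabyte; 350 and 10 come from the thread title.
        def dollars_per_tb(price_usd, terabytes):
            return price_usd / terabytes

        print(dollars_per_tb(350, 10))   # the My Cloud + USB drive combo: 35.0 $/TB
        # print(dollars_per_tb(quote, capacity))  # plug in any pre-built NAS you're comparing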

  2. #2
    Quote Originally Posted by Cris
    The biggest cost is in the ability to access it through the network, so I separated the two cost elements.
    (Apologies in ADVANCE for the length of what follows. Consider it an FYI.)

    Actually, no, the biggest "cost" (assuming you consider your data to have "value") is ensuring the integrity of your data after it's been stored.

    You'll never know that last year's tax return or those photos of your grandkids have been lost or corrupted until you go to access them, specifically, and discover the drive throws a read error on those file(s). Or, some bit of electronics in the drive gives up the ghost and renders the entire drive inaccessible. E.g., what will you do if your My Cloud dies? Or, if the external hard drive you've tethered to it refuses to spin up??

    [Folks don't think about these things until after they've been bitten -- and then go looking for someone who can recover the data from their now dead device. The same is true of folks who "make backups" of their computers -- but, have never actually tried to RESTORE their computer from one of those backups... how do you know if it will work?? ]

    The "industry standard" approach to this is redundancy -- typically in the form of RAID (Redundant Array of Independent Disks). There, the data is stored in a fashion that makes it possible to recover from (some number and specific types of) media errors -- including total drive failures.

    But, RAID is expensive (not just because it requires more disk space). And slow. And, in the event of a failure, typically has a lengthy recovery process during which any subsequent failure(s) results in permanent data loss! (rebuilding a RAID array can take many DAYS, depending on the type of array and its size).

    And, many RAID implementations won't tell you anything about the integrity of your data store until you specifically go looking for a particular file. I.e., if the RAID had to "compensate" (repair errors detected) for a corrupted previous year tax return that you'd tried to access, what does that say about those photos of your grandkids, on the same disk? Are they intact? Or, is the "tax failure" a sign that there is a degradation in the medium itself (and you'd best try to grab everything you can before an irrecoverable failure).

    Note that this problem isn't confined to "hard disks" but, rather, all storage media. Are you sure those DVDs that you copied your medical records onto a few years ago "for safe keeping" are still readable? Or, might they have suffered "bit rot" as the media ages (depends on the manufacturer of the media, the device used to record them and how they have been stored in the intervening time).

    Like you, I opted to "separate the elements" -- network access, storage media, redundancy and "integrity verification". I have a database that tracks the names of every file in my "network archive". This tells me which medium holds the file, where it resides on that medium (i.e., which "folder"), the last time I verified its integrity and a "signature" (checksum) that is representative of the file's contents. To verify a file's integrity, I compute the signature of the file as it currently exists and compare it to the signature stored in the database. If I can't read the file (damaged medium?), then it is obviously "broken". If the signature is incorrect, then it has obviously been corrupted (broken).
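    In code terms the check is nothing exotic. A minimal Python sketch, assuming SHA-256 as the "signature" and a plain dict standing in for the catalog database (the names and the sample entry are illustrative, not my actual tooling):

        # Recompute a file's signature and compare it to the catalogued one.
        import hashlib
        from pathlib import Path

        def signature(path: Path) -> str:
            """SHA-256 of the file's contents, read in 1 MB chunks."""
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # catalog: file name -> (medium, folder, stored signature) -- illustrative entry
        catalog = {
            "Taxes 2015.whatever": ("disk #3", "Personal", "ab12..."),
        }

        def verify(path: Path) -> bool:
            """True if the file is readable AND its signature matches the catalog."""
            medium, folder, stored = catalog[path.name]
            try:
                return signature(path) == stored
            except OSError:
                return False   # unreadable -- damaged medium?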

    And, because I have this information stored in my database for EVERY file, regardless of location, I can use that to locate another copy of the file on some other medium ("Taxes 2015.whatever" also exists on disk #6 in folder "Stuff I saved in 2016/Personal" under the name "Taxes.something" -- note that the name need not be the same!). I then go and find "disk #6" -- which may be a CD, DVD, external USB drive, etc. -- and install it "somewhere" to recover the other copy of the file. And, use this copy to fix the original, corrupted instance of the file.
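    The "find another copy" step is just a lookup keyed on the signature rather than the name, so "Taxes 2015.whatever" and "Taxes.something" are recognised as the same content. A sketch, reusing the same illustrative catalog idea as above but shaped as a list of records:

        # Match copies by signature, not by file name; the names can differ.
        records = [
            {"name": "Taxes 2015.whatever", "medium": "disk #3",
             "folder": "Personal", "sig": "ab12..."},
            {"name": "Taxes.something", "medium": "disk #6",
             "folder": "Stuff I saved in 2016/Personal", "sig": "ab12..."},
        ]

        def other_copies(sig, exclude_medium):
            """Every catalogued copy of the same content on some other medium."""
            return [r for r in records
                    if r["sig"] == sig and r["medium"] != exclude_medium]

        print(other_copies("ab12...", exclude_medium="disk #3"))
        # -> the copy on disk #6, under a different name, ready to be restored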

    RAID would do this automatically -- perform the check, figure out how to recover the original data and then provide it to you as if nothing "bad" had ever happened. But, to do that, it has to have the backup copy "on line" whenever the primary copy is accessed. In my scheme, YOU have to perform those steps -- but, in return, don't have to keep as much data on-line at any given time. AND, you can keep as many copies of the data as you consider prudent (on the same disk, on different disks, on different types of media, etc.)

    You can approximate this sort of approach by ZIPing each individual file. If you can unzip the file, then the file hasn't been corrupted (because the ZIP file contains a signature of its contents which it verifies when you unzip it!). You can keep a duplicate copy of each disk -- same folder names, same (ZIPed!) files, etc. -- so a failure of one file to unzip properly alerts you to retrieve the "backup" copy from the "other" disk. In this way, you needn't keep all of your data "spinning" -- even the stuff that you rarely use.
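    That works because ZIP stores a CRC-32 for each member and checks it on extraction. A small Python sketch of the idea (file names are placeholders):

        # One file per ZIP: the archive carries its own checksum, and testzip()
        # re-reads every member and verifies the stored CRC-32.
        import zipfile

        def archive(src, dest_zip):
            with zipfile.ZipFile(dest_zip, "w", compression=zipfile.ZIP_DEFLATED) as zf:
                zf.write(src)

        def is_intact(dest_zip):
            """True if every member's CRC checks out; False on corruption or read error."""
            try:
                with zipfile.ZipFile(dest_zip) as zf:
                    return zf.testzip() is None
            except (zipfile.BadZipFile, OSError):
                return False

    If is_intact() comes back False for a file on one disk, that's the cue to pull the duplicate copy from the "other" disk, as described above.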

    E.g., I have about 1 TB of music that resides on an external disk. There are other copies of certain portions of that data (albums that I like to listen to often) on other media (e.g., in my phone, on my primary computer, etc.). If I want to update my music selections (get tired of a particular group of albums), I spin up the external disk and grab a copy of whatever I want. If something happens to be corrupted (or, maybe the disk doesn't spin up at all!), then I go looking for the backup copies of those affected files on another disk (or, on the CDs from which they were originally culled).

    The point of all this TL;DR is that you need to consider how you'll address failures in your data store more so than how to create it in the first place! (You've not known pain until you've lost a 1 TB, 4 TB -- or larger -- drive and all of its contents!)
