The Problem: An ad agency hosted all media for its projects on a mixture of drives with non-sequential naming and no backups, leading to difficulty locating assets and the occasional irrevocable loss of media.
The Solution: Create a library of sequentially named hard drives, each with a mirrored backup of equal size, all maintained through a check-in/check-out system and strict usage rules.
The Details: In 2013, I joined the team of an up-and-coming ad agency with major clients in the cosmetics and fashion industries. Their existing technology infrastructure was a source of constant problems that often got in the way of creative focus. Namely, their storage system at the time consisted of a roster of 2TB RAID0 G-Drives, all named after major international cities (e.g. Chicago, Salt Lake City, Paris), with no backups and no record of where anything was located aside from the lead editor’s memory.
To exacerbate matters, under this system a project and its media would be placed on whatever drive was readily available, and if that drive filled up, the next available drive would be used as well. This led to situations where a single project could be spread across multiple “cities”, or a single “city” could host many different projects.
Often, when memory failed, the post team would have to go through every city (i.e. plug in every drive and browse its contents) to find what they needed. And since there were no backups, storage mishaps meant complete loss of media.
These issues occurred frequently, and even more so as the company tried to grow and take on bigger projects, wasting many labor hours in the process.
We began to call these situations “Olympic crises”.
When we began discussing possible solutions, media servers were not within their immediate budget horizon, so I needed to work with the resources available.
Enter the drive library.
I gathered all the existing drives, bought a handful of 4TB and 6TB drives for the largest projects, created a drive “library” shelf with a check-in/check-out system, and established a few rules for its use.
Each drive followed a three-digit sequential number and was paired with a mirrored copy, which would be reconciled at the end of each day.
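The end-of-day reconciliation described above can be sketched in code. This is a minimal illustration, not the team’s actual procedure (which was done by hand with standard copy tools); the paths and function name are hypothetical, and it uses Python’s standard `filecmp` and `shutil` modules to detect and copy over differences from a primary drive to its mirror.

```python
# Hypothetical sketch of an end-of-day mirror reconciliation:
# make the backup drive match the primary drive.
import filecmp
import shutil
from pathlib import Path

def reconcile(primary: str, mirror: str) -> list:
    """Copy anything new or changed on the primary to the mirror.
    Returns the list of relative paths that were synchronized."""
    primary_root, mirror_root = Path(primary), Path(mirror)
    synced = []
    for src in primary_root.rglob("*"):
        rel = src.relative_to(primary_root)
        dst = mirror_root / rel
        if src.is_dir():
            dst.mkdir(parents=True, exist_ok=True)
        elif not dst.exists() or not filecmp.cmp(src, dst, shallow=False):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy file contents and metadata
            synced.append(str(rel))
    return synced
```

A real mirroring pass would also have to handle deletions and verify checksums; tools like `rsync -a --delete` cover those cases and were the likelier choice in practice.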
Any one project could exist on only one drive pair (manageable, since no project’s media collection exceeded 4TB). Furthermore, only projects that shared shot media could ever coexist on the same drive pair.
The whole drive library was kept on a properly maintained shelf, organized sequentially.
To keep track of the drives, I set up a Google spreadsheet with an easy-to-remember link redirect, and placed a recycled computer next to the library shelf that could only access that spreadsheet.
In the spreadsheet, we manually maintained an inventory: each drive’s sequential number, its pair, which projects were hosted on which drives, each drive’s current location, the last person to use it, and its reconciliation status.
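The fields above amount to one record per drive plus two operations: check out and check in. The following sketch models that ledger; all field and function names are hypothetical (the real system was a manually maintained spreadsheet, not code).

```python
# Hypothetical model of one row in the drive-library spreadsheet,
# with the check-out / check-in operations the team performed by hand.
from dataclasses import dataclass, field

@dataclass
class DriveRecord:
    number: str                      # three-digit sequential ID, e.g. "007"
    pair: str                        # ID of the mirrored copy, e.g. "007B"
    projects: list = field(default_factory=list)
    location: str = "library shelf"
    last_user: str = ""
    reconciled: bool = True          # mirror matched at last end-of-day check

def check_out(record: DriveRecord, user: str, destination: str) -> None:
    """Log a drive leaving the shelf; its mirror is now pending changes."""
    record.location = destination
    record.last_user = user
    record.reconciled = False

def check_in(record: DriveRecord) -> None:
    """Log a drive returning to the shelf after its mirror was reconciled."""
    record.location = "library shelf"
    record.reconciled = True
```

Usage mirrors the daily routine: an editor checks a drive out to an edit bay, works, and the pair is reconciled and checked back in at day’s end.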
The Results: As with any new system, adoption met some obstacles and resistance, but soon enough the team came to rely on the drive library’s advantages and kept each other accountable for compliance.
The Senior Editor was freed from the time otherwise wasted fielding questions about media location; and what’s more, no project was ever again irreparably lost.
Though the “drive library” may seem a rudimentary setup, it is a solid first step on the road to more sophisticated (and pricier) solutions. Such was the case for this team: the drive library paved the way for a more efficient workflow that allowed them to expand their creative work and take on more, and bigger, projects.