1. local caching of media files being downloaded by users on the LAN (downloaded from the main site in the background).
  2. local caching of media being uploaded by users on the LAN (sent to the main site in the background).
  3. local transcoding of uploaded media, so local users can view locally uploaded media before it reaches the main site.
  4. remote control and configuration of local servers (perhaps using drupal?), allowing:
  • stats
  • troubleshooting
  • control over which videos are uploaded/downloaded and in what order
  • patches, upgrades, etc.
  • bandwidth controls
  5. server would be installable from a CD
  6. standardized solid hardware: maybe we should store video files on a large internal drive instead of an external one; it might be safer.
  7. local cached version of the site – slave db synced from master
  8. cached content can be moved via hard drive.

General specs

  • pluggable CDN as the model for file delivery, with URL re-writing at the server
  • pluggable back-end to add new file types and new file delivery methods
  • local server machines are as simple and plug-and-play as possible; configuration happens on internet-accessible servers
  • communication between the components using XML-RPC
  • use drupal/php for all coding
  • code to drupal standards for formatting, documentation, etc.


  1. Central Server (CS) - keeps track of all the files and their locations and is the admin interface.
  • Drupal for authentication, XML-RPC and user-interface, minimal footprint
  • “Caching Server” module with custom code.
  • Nodes on the Central Server uniquely identify every file that can potentially be cached (by a combination of filepath, type and option) and store an array of delivery methods:
    • Local Servers, Qiniq QFile, default Amazon CloudFront delivery, etc...
    • array('backend' => 'url') where backend can be 'local_server' or 'amazon' (for now)
  • Users as back-end clients (individual local servers) and Roles as different types of back-ends (local server, Qfile). Also roles/users for human admins.
  • NodeQueue to track files (as nodes) as they are synchronized with the various back-ends (except the Amazon default, which is uploaded from the Site Server and is the base for all other back-ends), and as the base for the user interface to control sync, etc.
  • provides URL information for files to the SS for URL re-writing. Determines which back-end to use based on the requesting computer's IP (maybe other factors? leave open for the future).
  2. “Caching Client” module to be installed on the Site Server (SS) – communicates with the Central Server.
  • requests url information for files
  • updates CS with latest uploaded files (as MM action)
  • implements two-stage file uploading (for video, maybe other filetypes in the future)
  3. Local Servers (LS) – simple black boxes that serve files locally and communicate with the Central Server
  • transcodes locally uploaded files
  • creates static version of site
  • solid build, small boxes
  • large as possible internal hard drive (2TB? 4TB?)
  • multiple ethernet?
  • e-sata or other high-speed drive connection
  • Debian stable
  • Apache, PHP5
  • scripts for receiving files, communicating with CS, transcoding, etc.
  • scripts for static cache of site
  • automount attached e-sata drive
  • easily deployable to local servers (could be CD, other)

Functional outline – local servers example

1. When a Local Server is plugged in and has internet access, it contacts the Central Server with a token generated for local servers (all such contact is made using XML-RPC): cs.newcache($token, ‘local_server’). The CS creates a user account for the LS, assigns it to the LS role, and returns to the LS its auth credentials (username and password combo). The CS creates a Download node queue, an Upload node queue, and a Files node queue for the LS. The Download node queue tracks all the files to be downloaded to the LS and is filled with all existing active File nodes. The Upload node queue tracks the files being uploaded to the Site Server from the LS. The Files node queue tracks the files that can be streamed from the LS. These queues can all be manipulated by logged-in admins.
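As a rough illustration, the registration call above could be issued as a raw XML-RPC request. Everything beyond the method name and parameters from the text is an assumption: the CS host, the endpoint path, the token value, and the use of curl are hypothetical.

```shell
#!/bin/sh
# Sketch of the cs.newcache($token, 'local_server') registration call.
# CS_URL and TOKEN are placeholder values, not the real ones.
CS_URL="https://cs.example.org/xmlrpc.php"
TOKEN="local-server-provisioning-token"

payload() {
  cat <<EOF
<?xml version="1.0"?>
<methodCall>
  <methodName>cs.newcache</methodName>
  <params>
    <param><value><string>$TOKEN</string></value></param>
    <param><value><string>local_server</string></value></param>
  </params>
</methodCall>
EOF
}

# The real script would POST this and parse the returned credentials:
# curl -s -H "Content-Type: text/xml" --data-binary "$(payload)" "$CS_URL"
payload
```

The response would carry the generated username/password combo for the LS to store locally.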

2. On a regular cron cycle (ex. 5 minutes) the LS contacts the CS and sends its auth and LAN and WAN IP: cs.location($auth, LAN_IP, WAN_IP) The CS stores (or updates) these with the LS username, a unique ID and the current timestamp in a “Local Servers” table. The CS removes any stale combos. (When a regular user contacts the Site Server, their WAN IP is used to find the LS in their LAN and URLs are rewritten to this LS.)
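As a sketch, this heartbeat could be driven by a cron.d entry on the LS; the file name and script path below are assumptions, not part of the spec.

```shell
# /etc/cron.d/ls-heartbeat -- hypothetical file name and script path.
# Every 5 minutes, send auth plus LAN/WAN IPs via cs.location().
*/5 * * * * root /usr/local/bin/cs-location.sh
```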

3. When a file of a particular type – video SD, video HD, images at various sizes, audio – is uploaded to the Site Server, a node of type “File” is created on the CS which stores the unique files table filepath from the SS and its type (eg. video) and option (eg. high or low). This node is added to the bottom of all Download node queues (unless already present in the Files queue, due to LS conversion). cs.announce($auth, $filename, $type, $option)

4. A LS contacts the CS and is given a URL for the file at the top of its Download queue: cs.download($auth, ‘get’). When it has downloaded the file it contacts the CS and the node is added to its Files queue: cs.download($auth, ‘put’, $filename, $type, $option)

5. When a computer contacts the SS, its WAN IP is looked up in the “Local Servers” table on the CS: cs.setBackend($auth, $WAN_ip, $session). If found, the LAN IP or IPs are returned to the browser with JavaScript code that probes for the Local Server at those IPs. If one responds, the CS is alerted and the LS IP and unique ID are stored in the browser’s Session: cs.setBackend($auth, $WAN_ip, $session, $local_server). The entry will be removed from the Session if it is stale-dated in step 2.

6. Download behaviour – when computers on LS networks request files: if a requested file (video, audio, image, etc.) is present in the Files node queue for the local server, then the URL to the file is rewritten using the local IP and sent to the SS for page generation. If there is no local server or the file is not in the Files queue, then the URL may be re-written in other ways (as an RTMP stream from CloudFront, for example): cs.getURL($auth, $filename, $type, $option, $session)

7. Upload behaviour – when computers on LS networks upload (video) files: uploading (creating content) will be a two-stage process. A node creation form creates the content node on the SS, and then a second step is returned to the user for uploading the file. The file will be uploaded to the LS, and these steps take place:

  1. LS contacts the SS (or an upload script) with the name of the file and the NID (perhaps other form data, like a token) and the file is created:
  • dummy file in the files directory, to avoid name collisions
  • entry in the content field and files table
  • somehow Media Mover harvesting is inhibited.
  2. LS adds the file to its Upload node queue: cs.upload($auth, ‘put’, $filename, ‘video’, ‘high’)
  3. when the file is uploaded it replaces the dummy file and Media Mover harvesting is re-enabled. File conversion proceeds as normal.
  4. LS converts the video file to hi-res formats. The LS contacts the CS, which adds the video to the Files queue for that LS (so it will not be downloaded to this LS when converted on the SS): cs.download($auth, 'put', $filename, 'video', 'high'). This upload process will be implemented for video but may be adapted to other content (audio, images) if necessary.
8. LS periodically crawls the SS to create a static version of the site – URLs to media files point to the locally cached versions.
9. Admin accounts can be created on the CS allowing admins to alter queue order and remove files from queues, and to control bandwidth allocation for upload and download.

10. Specially configured hard drives can be connected to an LS through high-speed (e-sata) connections. The LS will automatically mount the hard drive, and the process can be monitored from the admin interface. New download files can be copied to the LS and will be removed from the Download queue and added to the Files queue. New upload files can be copied from the LS to the hard drive and removed from the Upload queue (and added centrally when the hard drive returns to the main office).

Hardware possibilities


There are many different classes of system to choose from, depending on exactly how harsh an environment these machines have to withstand. Predictably, there is a direct relationship between price and reliability. The systems below are listed from least to most reliable. This is not an exhaustive list of candidates; it will be revised based on feedback from the client until we know exactly what kind of system they need.


The Acer Aspire Easystore H340 is a home storage server that normally comes with Windows Home Server. Linux also runs on it. If we add 2x2TB hard drives, we have a 5TB system for about $710.

Pros:
  • Cheap
  • Simple
  • Low power
  • Front panel lights

Cons:
  • Requires adapter for keyboard/mouse/video
  • No linux drivers for the front panel lights
  • M$ tax
  • Fans may inhale too much dust
  • Processor may not be powerful enough for transcoding

Swapping out this server for an HP MediaServer would give us a much more powerful (dual-core Pentium) system for $1010 total. This removes the transcoding concern, but all other pros/cons remain the same.


We could build a custom system for about $1000 around a case like this: http://ncix.com/products/?sku=37520&vpn=CSE-733TQ-645B&manufacture=SuperMicro

Pros:
  • Cheap
  • Commodity parts mean spares are easy to find

Cons:
  • Time-consuming to put together systems from scratch rather than buy pre-made
  • Fans may inhale too much dust

We could also custom-order a server from a big-name manufacturer: a Dell PowerEdge 110 with 4TB would cost $2,769; an HP ProLiant 310 g5 with 4TB would cost $2,953.

Pros:
  • No assembly required
  • Excellent support

Cons:
  • More expensive
  • Non-commodity parts
  • Fans may inhale too much dust


This ARK-3440 system by Advantech is fanless and extremely rugged, and so would hold up very well. No price is listed, but judging by similar systems, it probably costs around $2000.

The slightly older ARK-3420-S and ARK-3420-S1 have Celeron and Core 2 Duo processors and are $1080 and $1550, respectively.

Pros:
  • Fanless – no danger of dust or smoke damage
  • Robust

Cons:
  • Expensive
  • Limited storage space (2x2.5” laptop drives), giving a max of 2TB of HDD or 1TB of flash
  • Non-commodity parts

Milspec

God knows how much these cost. Manufacturers don’t publish prices.

They also don’t generally offer dust-proof systems with a lot of storage. Most sealed systems have 1 or 2 2.5” bays, like the above industrial system.


For machines that don’t have a front panel display, or have one that’s not accessible from Linux, we can use something like the CW1602. This is a 16x2 LCD with 6 buttons, supported on Linux by the LCDproc or lcd4linux software.

Auto-mounting Hotplugged Devices


udev is a user-mode system that triggers on device insertion or removal, and can be used to change device names in /dev or perform other, more complicated tasks via shell scripts.

Scripts are triggered by matching rules.

The following rule should match any USB drive plugged into the system (a RUN clause, shown further down, is what actually launches the automount script): KERNEL=="sd*", DRIVERS=="usb-storage"

This rule matches any SCSI drive: KERNEL=="sd*", DRIVERS=="sd"

This matches an IDE drive: KERNEL=="hd*", DRIVERS=="ide-disk"

Annoyingly, these don’t match on remove, only on insertion. Removing the DRIVERS== clause will make them match on both.

So, we use these rules to match on insertion and removal of any USB or SCSI disk:

ACTION=="remove", KERNEL=="sd*", RUN+="/home/debian/driveremoved.sh $kernel"
ACTION=="add", KERNEL=="sd*", RUN+="/home/debian/driveadded.sh $kernel"

This is the driveadded.sh script that automounts the drives and copies files off them: source:automount/driveadded.sh

And this is the driveremoved.sh script that unmounts the drive when unplugged, then deletes the mountpoint: source:automount/driveremoved.sh
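As a rough illustration of what driveadded.sh does (the authoritative version is at the source: link above), a minimal sketch might look like the following; the mount prefix and local cache path are assumptions.

```shell
#!/bin/sh
# Minimal sketch of driveadded.sh; udev passes the kernel device name
# ($kernel, e.g. "sdb1") as the first argument. Paths are assumptions.

mount_point() { echo "/media/$1"; }   # where hotplugged drives get mounted

DEV="${1:-}"
if [ -n "$DEV" ]; then
  MP="$(mount_point "$DEV")"
  mkdir -p "$MP"
  mount "/dev/$DEV" "$MP" || exit 1
  # copy any cached media off the drive into the local store (assumed path)
  cp -r "$MP"/files/. /var/cache/localserver/files/ 2>/dev/null || true
fi
```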

Synchronization scripts

This is an outline (in pseudocode) of the scripts that will synchronize files between the local servers and the site server. They are launched by cron on each local server at 5-minute intervals.

If this causes too much load on the server, it would be a good idea to stagger the cron jobs so they start at different times, or put a random delay at the start of the script.
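The random-delay option could be as simple as the sketch below; the 120-second ceiling is an assumption, not a requirement.

```shell
#!/bin/sh
# Sketch of the suggested random start delay: wait up to two minutes so
# local servers sharing the same 5-minute cron schedule do not all hit
# the site server at once.
random_delay() {
  awk 'BEGIN { srand(); print int(rand() * 120) }'
}

DELAY="$(random_delay)"
echo "delaying start by $DELAY seconds"
# sleep "$DELAY"   # commented out so the sketch itself runs instantly
```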

Stale lockfiles will be deleted from within the same script (rather than through a cron job as has been considered earlier).

Upload script

if there is a lockfile in /var/run:
    if it is stale, delete it
    else exit
create lockfile in /var/run to indicate upload is running
get name of file at head of upload queue from cs.upload xmlrpc
if there is no file to upload:
    delete lockfile and exit
call rsync to copy the file to the site server
if transfer failed:
    re-add file to end of upload queue using cs.upload
delete lockfile
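The outline above might translate to shell roughly as follows. The lockfile path, the one-hour staleness threshold, the rsync destination, and the idea that upload.php doubles as a command-line wrapper for cs.upload are all assumptions for the sketch.

```shell
#!/bin/sh
# Hypothetical sketch of the upload synchronization script; the real
# wrappers live in local_servers/sync.
LOCK=/var/run/ls-upload.lock
MAX_AGE=3600   # lockfiles older than an hour are considered stale

is_stale() {   # $1 = age in seconds, $2 = threshold
  [ "$1" -gt "$2" ]
}

upload_sync() {
  if [ -e "$LOCK" ]; then
    age=$(( $(date +%s) - $(stat -c %Y "$LOCK") ))
    if is_stale "$age" "$MAX_AGE"; then
      rm -f "$LOCK"          # stale: a previous run died
    else
      return 0               # another upload is in progress
    fi
  fi
  echo $$ > "$LOCK"

  # head of the upload queue, via the cs.upload XML-RPC wrapper (assumed CLI)
  FILE="$(php upload.php get)" || FILE=""
  if [ -z "$FILE" ]; then
    rm -f "$LOCK"; return 0  # nothing to upload
  fi

  if ! rsync -a "/var/cache/localserver/files/$FILE" siteserver:/files/; then
    php upload.php put "$FILE"   # failed transfer: re-queue at the end
  fi
  rm -f "$LOCK"
}

# upload_sync   # invoked by cron; commented out in this sketch
```

The download script would follow the same shape with the rsync direction reversed.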

Download script

if there is a lockfile in /var/run:
    if it is stale, delete it
    else exit
create a lockfile in /var/run to indicate download is running
get name of next file to download from cs.download xmlrpc
if there is no file to download:
    delete lockfile and exit
call rsync to copy the file from the site server
if transfer failed:
    re-add file to end of download queue using cs.download
delete lockfile

Test scripts


The Local Server software is bundled with a series of test scripts that wrap the XML-RPC communication functions between the local server and the central server. A comprehensive run-through of all the test scripts is outlined below. The full series can be run using the testall script in the local_servers/sync directory.

Testing process

The test scripts may also be called individually to test a single function if desired. They are:

  • addfile.php - Creates a new file node on the Central Server and adds it to all local server download queues.
  • addtoqueue.php - Adds an existing file node to a queue. It is an error if the node does not exist.
  • createfilenode.php - Create a new file node, suitable for using in addtoqueue or similar.
  • getamazonurl.php - Given the filename, type and options of a file, get the URL for its location on Amazon S3.
  • getconfig.php - Retrieve the local server’s recorded config info from the Central Server, including LAN/WAN IPs and bandwidth limits.
  • getfullqueue.php - Get all published and unpublished files in the named queue.
  • getqueue.php - Get all published files in the named queue.
  • getunpublished.php - Get only unpublished files in the named queue.
  • isinqueue.php - Tests whether or not a named file is in a queue.
  • movetoqueue.php - Move a file from one queue to another.
  • removefromqueue.php - Remove a file from a queue and delete its file node.
  • reservefilename.php - Obtain a name for a new file upload that does not conflict with other files already present on the server.
  • testreplace.php - Given a filename and an extension, create a new filename with the given extension on the end.

In addition, the following production scripts are in the same directory:

  • deleteunpublished.php
  • download.php
  • setconfig.php
  • transcode.php
  • upload.php

How to set up kiosk mode

  1. Configure firefox and init scripts according to these instructions: http://jadoba.net/kiosks/firefox/

  2. Add further config changes suggested in description and comments here: https://addons.mozilla.org/en-US/firefox/addon/1659/

  3. Disable screen blanking and power saving by adding the following to ~/.xinitrc:

    xset s 0 0
    xset -dpms
  4. Comment out the startkiosk.sh line in /etc/rc.local

  5. Add the following lines to /usr/local/bin/startkiosk.sh right above the “su” line:

    echo "Press enter to start kiosk mode."
    read
  6. Add this at the end of /etc/inittab:

    7:23:respawn:/sbin/getty -n -l /usr/local/bin/startkiosk.sh 38400 tty1

I highly recommend not using these firefox “kiosk mode” instructions without more research and testing. In practice we found this setup very unstable. The much simpler setup currently described in the Setup Guide (plain xorg and iceweasel with flash player) works much better. Even better might be the standalone flash player set to the required playlist, with no browser at all... John