
Sprite

Historical Context

Professor Voelker worked on Sprite at UC Berkeley as an undergraduate (a very, very long time ago)

Three technology trends:

  1. Networks →

    • More difficult system administration
    • More time sharing →
      • We want a single global namespace for files
    • Idle machines → We want to enable process migration // Ask about this
  2. Larger memories →

    • Caching
  3. Multiprocessors →

    • Sharing
    • Parallelism
    • OS has to be multiprocessor-aware → we need finer-grained locks for mutual exclusion (see the sketch after this list)
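
As an illustration of that last point (this is not from the paper; the class, names, and bucket count below are invented for the example), here is a toy Python sketch of fine-grained locking: instead of one big lock around a shared structure, each bucket gets its own lock, so threads running on different processors only contend when they touch the same bucket.

```python
import threading

NUM_BUCKETS = 64  # arbitrary choice for this example

class FineGrainedTable:
    """Shared hash table protected by one lock per bucket instead of one global lock."""

    def __init__(self):
        self.buckets = [dict() for _ in range(NUM_BUCKETS)]
        self.locks = [threading.Lock() for _ in range(NUM_BUCKETS)]

    def _index(self, key):
        return hash(key) % NUM_BUCKETS

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:   # only this bucket is locked, not the whole table
            self.buckets[i][key] = value

    def get(self, key):
        i = self._index(key)
        with self.locks[i]:
            return self.buckets[i].get(key)
```

With a single global lock, every operation would serialize all processors; with per-bucket locks, operations on different buckets proceed in parallel, which is the kind of fine-grained mutual exclusion a multiprocessor-aware kernel needs.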

Goals

Implementation

Sprite caches file data on client machines since memory is now a more plentiful resource → the same file may be cached on multiple machines → this raises a consistency issue

Sharing: Concurrent vs. Sequential

|             | Concurrent | Sequential |
|-------------|------------|------------|
| Description | Two or more users use the same file at the same time from two or more different machines | A user on one machine opens a file, uses it, and closes it; later a user on another machine opens the same file → sharing over time, but not at the same time |
| Solution    | Disable caching on the clients involved; reads and writes for that file go through the server instead | Version numbers: on open, the client checks with the server that its cached copy is the latest version of the file |
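
To make the two solutions concrete, here is a minimal Python sketch of the open-time checks, assuming a hypothetical `FileServer` class with invented bookkeeping fields (this is not Sprite's actual code): the server bumps a version number on every open-for-write so clients can detect sequential sharing, and it reports the file as uncacheable when it sees concurrent write sharing.

```python
class FileServer:
    """Toy server-side bookkeeping for Sprite-style open-time consistency checks."""

    def __init__(self):
        self.version = {}   # path -> current version number
        self.writers = {}   # path -> set of clients with the file open for writing
        self.readers = {}   # path -> set of clients with the file open for reading

    def open_file(self, client, path, for_write=False):
        version = self.version.setdefault(path, 1)
        writers = self.writers.setdefault(path, set())
        readers = self.readers.setdefault(path, set())

        # Concurrent write sharing: someone else already has the file open for
        # writing, or we are opening for write while others have it open at all.
        others = (writers | readers) - {client}
        concurrent = bool(writers - {client}) or (for_write and bool(others))
        cacheable = not concurrent   # concurrently write-shared files are not client-cached

        if for_write:
            writers.add(client)
            version += 1             # new version on every open-for-write
            self.version[path] = version
        else:
            readers.add(client)

        # Sequential sharing: the client compares `version` against the version
        # of its cached blocks and discards them if the numbers differ.
        return version, cacheable
```

On open, the client compares the returned version with the version of its cached blocks and discards them if they differ; if `cacheable` is false, it sends all reads and writes through the server. In the actual Sprite protocol, the server also notifies clients that already have the file open (e.g. so the current writer flushes its dirty blocks and everyone stops caching), which this sketch omits.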

Authors' argument regarding sharing

The authors argue, based on measurements of common computation tasks, that file sharing is quite uncommon, and that concurrent sharing is even rarer than sequential sharing. This makes the overhead of pushing caching to the server for concurrently shared files insignificant to the overall user experience.

Takeaway
