Computer Systems

Posts on computer systems: cache eviction algorithms, cache storage design, and distributed file systems.

SIEVE: Cache eviction can be simple, effective, and scalable

TL;DR Caching bolsters the performance of virtually every computer system today by speeding up data access and reducing data movement. A cache stores frequently accessed objects on a small but comparatively fast storage device so that future requests for those objects can be served quickly. When the cache's capacity is much smaller than the full dataset, choosing which objects to keep, and which to evict, becomes an important, hard, and fascinating problem.
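
SIEVE itself is simple enough to sketch in a few dozen lines. The Go code below is a minimal, single-threaded illustration (not the post's reference implementation; the string-valued cache and names are assumptions): a FIFO queue with one "visited" bit per object, plus a hand that scans from the oldest object toward the newest and evicts the first object whose bit is unset.

```go
package main

import "container/list"

// entry is one cached object plus SIEVE's single bit of per-object state.
type entry struct {
	key     string
	value   string
	visited bool
}

// sieve is a FIFO queue (front = newest) with a hand that scans from the
// oldest object toward the newest at eviction time.
type sieve struct {
	capacity int
	ll       *list.List               // insertion order; front = newest
	items    map[string]*list.Element // key -> queue node
	hand     *list.Element            // nil means the next scan starts at the back
}

func newSieve(capacity int) *sieve {
	return &sieve{capacity: capacity, ll: list.New(), items: make(map[string]*list.Element)}
}

// get only sets the visited bit on a hit; unlike LRU, nothing is moved.
func (s *sieve) get(key string) (string, bool) {
	el, ok := s.items[key]
	if !ok {
		return "", false
	}
	e := el.Value.(*entry)
	e.visited = true
	return e.value, true
}

// set inserts a new object at the front, evicting one object first if full.
func (s *sieve) set(key, value string) {
	if el, ok := s.items[key]; ok {
		e := el.Value.(*entry)
		e.value, e.visited = value, true
		return
	}
	if s.ll.Len() >= s.capacity {
		s.evict()
	}
	s.items[key] = s.ll.PushFront(&entry{key: key, value: value})
}

// evict clears the bit of visited objects (they survive in place) and
// removes the first unvisited object. The hand keeps its position across
// evictions; new objects enter at the front while the hand stays in the
// middle of the queue, which is what separates SIEVE from CLOCK.
func (s *sieve) evict() {
	el := s.hand
	if el == nil {
		el = s.ll.Back()
	}
	for el.Value.(*entry).visited {
		el.Value.(*entry).visited = false
		if el = el.Prev(); el == nil {
			el = s.ll.Back() // wrap around to the oldest object
		}
	}
	s.hand = el.Prev() // resume here next time (nil restarts at the back)
	delete(s.items, el.Value.(*entry).key)
	s.ll.Remove(el)
}
```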

Juncheng Yang

Implementing FIFO queues without locks

Since the debut of S3-FIFO, many people have become interested in implementing the new eviction algorithm. However, several discussions have raised concerns about its scalability. In this post, I discuss how to implement FIFO and S3-FIFO without using any locks.
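
The post's own design isn't reproduced here, but the core trick can be illustrated with the simplest case: a single-producer/single-consumer FIFO ring buffer in Go, where each side mutates only its own atomic counter, so plain atomic loads and stores suffice and no lock or CAS loop is needed. The multi-producer queues a shared cache typically needs additionally rely on compare-and-swap or fetch-and-add; the type and names below are invented for the sketch.

```go
package main

import "sync/atomic"

// spscQueue is a minimal single-producer/single-consumer lock-free FIFO.
// head and tail are monotonically increasing counters; each side writes
// only its own counter, so atomic load/store is enough.
// Capacity must be a power of two so that index = counter & mask.
type spscQueue[T any] struct {
	buf  []T
	mask uint64
	head atomic.Uint64 // next slot to read (owned by the consumer)
	tail atomic.Uint64 // next slot to write (owned by the producer)
}

func newSPSCQueue[T any](capacity uint64) *spscQueue[T] {
	return &spscQueue[T]{buf: make([]T, capacity), mask: capacity - 1}
}

// enqueue returns false if the queue is full.
func (q *spscQueue[T]) enqueue(v T) bool {
	t := q.tail.Load()
	if t-q.head.Load() == uint64(len(q.buf)) {
		return false // full
	}
	q.buf[t&q.mask] = v
	q.tail.Store(t + 1) // publish the filled slot to the consumer
	return true
}

// dequeue returns the zero value and false if the queue is empty.
func (q *spscQueue[T]) dequeue() (T, bool) {
	var zero T
	h := q.head.Load()
	if h == q.tail.Load() {
		return zero, false // empty
	}
	v := q.buf[h&q.mask]
	q.head.Store(h + 1) // return the emptied slot to the producer
	return v, true
}
```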

Juncheng Yang

FIFO queues are all you need for cache eviction

More information can be found at https://s3fifo.com

Juncheng Yang

FIFO is Better than LRU: the Power of Lazy Promotion and Quick Demotion

TL;DR Historically, FIFO-based algorithms have been thought to be less efficient (to have higher miss ratios) than LRU-based algorithms. In this blog, we introduce two techniques: lazy promotion, which promotes objects only at eviction time, and quick demotion, which removes most new objects quickly. We show that the "weak LRUs" suggested by conventional wisdom, e.g., FIFO-Reinsertion, are actually more efficient (have lower miss ratios) than LRU; that simply evicting most new objects quickly can improve a state-of-the-art algorithm's efficiency; and that eviction algorithms can be composed like LEGO bricks by adding lazy promotion and quick demotion on top of FIFO, as sketched below.
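
To make lazy promotion concrete, here is a minimal single-threaded Go sketch of FIFO-Reinsertion, the "weak LRU" named above (an illustration with assumed names, not the study's benchmark code): a hit only sets a bit, and promotion happens solely at eviction time, when a visited tail object is reinserted at the head instead of being evicted.

```go
package main

import "container/list"

type frEntry struct {
	key     string
	visited bool
}

// fifoReinsertion promotes lazily: a cache hit is a single bit write,
// and the queue is only reordered when an eviction is needed.
type fifoReinsertion struct {
	capacity int
	ll       *list.List               // front = newest, back = oldest
	items    map[string]*list.Element // key -> queue node
}

func newFIFOReinsertion(capacity int) *fifoReinsertion {
	return &fifoReinsertion{capacity: capacity, ll: list.New(), items: make(map[string]*list.Element)}
}

// get records the access with one bit; no list movement (lazy promotion).
func (c *fifoReinsertion) get(key string) bool {
	el, ok := c.items[key]
	if ok {
		el.Value.(*frEntry).visited = true
	}
	return ok
}

// set admits a new key, first making room by scanning from the back:
// a visited object is reinserted at the front with its bit cleared,
// and the first unvisited object is evicted.
func (c *fifoReinsertion) set(key string) {
	if _, ok := c.items[key]; ok {
		return
	}
	for c.ll.Len() >= c.capacity {
		back := c.ll.Back()
		e := back.Value.(*frEntry)
		if e.visited {
			e.visited = false
			c.ll.MoveToFront(back) // promote instead of evicting
		} else {
			delete(c.items, e.key)
			c.ll.Remove(back) // evict
		}
	}
	c.items[key] = c.ll.PushFront(&frEntry{key: key})
}
```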

Juncheng Yang

Comparing cluster file systems MooseFS, BeeGFS, and Ceph

TL;DR This blog post describes my experience setting up MooseFS, BeeGFS, and Ceph on CloudLab.

Juncheng Yang

Segcache: a segment-structured cache for low miss ratio and high scalability

TL;DR Segcache is a segment-structured cache storage that provides high memory efficiency (low miss ratio), high throughput, and scalability, particularly for workloads that contain small objects and use TTLs (time-to-live). As a spoiler, in our benchmark, Segcache reduces the memory footprint of Twitter's largest cache cluster by up to 60% compared to slab-based storage.
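
One of Segcache's ideas, segment-level expiration, is easy to sketch. In the Go sketch below (invented names, and only this one idea: the real system's storage layout, shared object metadata, and merge-based eviction are not modeled), objects with the same approximate TTL are appended into time-ordered segments, so expired objects are found without scanning live objects and are removed a segment at a time instead of object by object.

```go
package main

import "time"

// segment holds objects inserted around the same time with the same
// approximate TTL, so they all expire around the same time.
type segment struct {
	createdAt time.Time
	keys      []string // append-only; the real system stores object data inline
}

// ttlBucket keeps its segments sorted by creation time (oldest first).
type ttlBucket struct {
	ttl      time.Duration
	segments []*segment
}

type segStore struct {
	buckets map[time.Duration]*ttlBucket
	index   map[string]*segment // key -> segment that currently holds it
}

func newSegStore() *segStore {
	return &segStore{buckets: make(map[time.Duration]*ttlBucket), index: make(map[string]*segment)}
}

// set appends the object to the newest segment of its TTL bucket,
// starting a new segment when the current one is full.
func (s *segStore) set(key string, ttl time.Duration) {
	const segSize = 4 // tiny, for illustration only
	b, ok := s.buckets[ttl]
	if !ok {
		b = &ttlBucket{ttl: ttl}
		s.buckets[ttl] = b
	}
	if n := len(b.segments); n == 0 || len(b.segments[n-1].keys) >= segSize {
		b.segments = append(b.segments, &segment{createdAt: time.Now()})
	}
	seg := b.segments[len(b.segments)-1]
	seg.keys = append(seg.keys, key)
	s.index[key] = seg
}

// expire drops whole segments whose TTL window has passed. Each bucket's
// segments are time-ordered, so only the expired prefix is touched; objects
// written late into a segment may be dropped slightly early, an error the
// real system bounds by bucketing TTLs.
func (s *segStore) expire(now time.Time) {
	for _, b := range s.buckets {
		i := 0
		for i < len(b.segments) && now.Sub(b.segments[i].createdAt) > b.ttl {
			for _, k := range b.segments[i].keys {
				if s.index[k] == b.segments[i] { // skip keys overwritten later
					delete(s.index, k)
				}
			}
			i++
		}
		b.segments = b.segments[i:]
	}
}
```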

Juncheng Yang
