Juncheng's blog

Learn something about everything, learn everything about something

Implementing FIFO queues without locks

Since the debut of S3-FIFO, many people have become very interested in implementing the new eviction algorithm. However, there have been several discussions in which concerns about scalability were raised. In this post, I will discuss how to implement FIFO and S3-FIFO without using any locks.

Juncheng Yang

FIFO queues are all you need for cache eviction

More information can be found at https://s3fifo.com

Juncheng Yang

Running distributed computation on Cloudlab

Background I like measurement and analysis, and I have run a lot of trace analyses in the past. For example, I found that FIFO-Reinsertion has a lower miss ratio than LRU for cache eviction, and that simple FIFO-based algorithms can be more efficient and effective than state-of-the-art algorithms.

Juncheng Yang

FIFO is Better than LRU: the Power of Lazy Promotion and Quick Demotion

TL;DR Historically, FIFO-based algorithms have been thought to be less efficient (to have higher miss ratios) than LRU-based algorithms. In this blog, we introduce two techniques: lazy promotion, which promotes objects only at eviction time, and quick demotion, which removes most new objects quickly. We show that the "weak LRUs" suggested by conventional wisdom, e.g., FIFO-Reinsertion, are actually more efficient (have lower miss ratios) than LRU; that simply evicting most new objects quickly can improve a state-of-the-art algorithm's efficiency; and that eviction algorithms can be designed like building LEGOs by adding lazy promotion and quick demotion on top of FIFO.

Juncheng Yang

Comparing cluster file systems MooseFS, BeeGFS, and Ceph

TL;DR This blog post talks about my experience setting up MooseFS, BeeGFS, and Ceph on Cloudlab.

Juncheng Yang

Segcache: a segment-structured cache for low miss ratio and high scalability

TL;DR Segcache is a segment-structured cache storage that provides high memory efficiency (low miss ratio), high throughput, and scalability, particularly for workloads that contain small objects and use TTLs (time-to-live). As a spoiler, in our benchmark, Segcache can reduce the memory footprint of Twitter’s largest cache cluster by up to 60% compared to the slab-based storage.

Juncheng Yang
