Persistence

When a C program exits, all of its global variables, local variables, and heap-allocated blocks are lost. Its memory is reclaimed by the operating system, erased, and handed out to other programs. So what happens if you want to keep data around for later?

To make this problem concrete, let’s suppose we want to keep track of a hit counter for web pages. From time to time, the user will run the command count_hit number, where number is an integer value in the range 0 to 99, say. (A real application would probably be using URLs, but let’s keep things as simple as possible.) We want count_hit to print the number of times the page with the given number has been hit, i.e. 1 the first time it is called, 2 the next time, etc. Where can we store the counts so that they will survive to the next execution of count_hit?

Simple solution using text files

The simplest solution is probably to store the data in a text file. Here’s a program that reads a file hit, increments the appropriate value, and then writes out a new version. To reduce the chances that data is lost (say, if count_hit blows up halfway through writing the file), the new values are written to a new file hit~, which is then renamed to hit, taking the place of the previous version.

#include <stdio.h>
#include <stdlib.h>

#define NUM_COUNTERS (100)      /* number of counters we keep track of */
#define COUNTER_FILE "/tmp/hit" /* where they are stored */
#define NEW_COUNTER_FILE COUNTER_FILE "~"  /* note use of constant string concatenation */

int
main(int argc, char **argv)
{
    int c;
    int i;
    int counts[NUM_COUNTERS];
    FILE *f;

    if(argc < 2) {
        fprintf(stderr, "Usage: %s number\n", argv[0]);
        exit(1);
    }
    /* else */

    c = atoi(argv[1]);
    if(c < 0 || c >= NUM_COUNTERS) {
        fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
        exit(2);
    }

    f = fopen(COUNTER_FILE, "r");
    if(f == 0) {
        perror(COUNTER_FILE);
        exit(3);
    }

    /* read them in */
    for(i = 0; i < NUM_COUNTERS; i++) {
        fscanf(f, "%d", &counts[i]);
    }
    fclose(f);

    printf("%d\n", ++counts[c]);

    /* write them back */
    f = fopen(NEW_COUNTER_FILE, "w");
    for(i = 0; i < NUM_COUNTERS; i++) {
        fprintf(f, "%d\n", counts[i]);
    }
    fclose(f);

    rename(NEW_COUNTER_FILE, COUNTER_FILE);

    return 0;
}

examples/persistence/textFile.c

If you want to use this, you will need to create an initial file /tmp/hit with NUM_COUNTERS zeroes in it.
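One way to do that initialization is with another tiny C program (a sketch; the helper name init_text_counters is invented here, and the path is passed in rather than hard-coded):

```c
#include <stdio.h>

#define NUM_COUNTERS (100)

/* Write an initial counter file containing NUM_COUNTERS zeroes,
 * one per line.  Returns 0 on success, -1 on failure. */
int
init_text_counters(const char *path)
{
    FILE *f;
    int i;

    f = fopen(path, "w");
    if(f == 0) return -1;

    for(i = 0; i < NUM_COUNTERS; i++) {
        fputs("0\n", f);
    }

    return fclose(f) == 0 ? 0 : -1;
}
```

Calling init_text_counters("/tmp/hit") once before the first run of count_hit sets up the file.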

Using a simple text file like this is the easiest way to keep data around, since you can look at the file with a text editor or other tools if you want to do things to it. But it means that the program has to parse the file every time it runs. We can speed things up a little bit (and simplify the code) by storing the values in binary.

Using a binary file

Here’s a version that stores the data as a binary file of exactly sizeof(int) * NUM_COUNTERS bytes. It uses the stdio routines fread and fwrite to read and write the file. These are much faster than the loops in the previous program, since they can just slap the bytes directly into counts without processing them at all.

The program also supplies an extra flag b to fopen. This is ignored on Unix-like machines but is needed on Windows machines to tell the operating system that the file contains binary data (such files are stored differently from text files on Windows).

#include <stdio.h>
#include <stdlib.h>

#define NUM_COUNTERS (100)      /* number of counters we keep track of */
#define COUNTER_FILE "/tmp/hit" /* where they are stored */
#define NEW_COUNTER_FILE COUNTER_FILE "~"  /* note use of constant string concatenation */

int
main(int argc, char **argv)
{
    int c;
    int counts[NUM_COUNTERS];
    FILE *f;

    if(argc < 2) {
        fprintf(stderr, "Usage: %s number\n", argv[0]);
        exit(1);
    }
    /* else */

    c = atoi(argv[1]);
    if(c < 0 || c >= NUM_COUNTERS) {
        fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
        exit(2);
    }

    f = fopen(COUNTER_FILE, "rb");
    if(f == 0) {
        perror(COUNTER_FILE);
        exit(3);
    }

    /* read them in */
    fread(counts, sizeof(*counts), NUM_COUNTERS, f);
    fclose(f);

    printf("%d\n", ++counts[c]);

    /* write them back */
    f = fopen(NEW_COUNTER_FILE, "wb");
    fwrite(counts, sizeof(*counts), NUM_COUNTERS, f);
    fclose(f);

    rename(NEW_COUNTER_FILE, COUNTER_FILE);

    return 0;
}

examples/persistence/binaryFile.c

Again, you’ll have to initialize /tmp/hit to use this; in this case, you want it to contain exactly sizeof(int) * NUM_COUNTERS null bytes (400 bytes if sizeof(int) is 4). On a Linux machine you can do this with the command dd if=/dev/zero of=/tmp/hit bs=400 count=1.
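The dd command hard-codes the assumption that sizeof(int) is 4; a small C initializer works for whatever sizeof(int) happens to be on the machine it runs on (a sketch; init_binary_counters is an invented name):

```c
#include <stdio.h>

#define NUM_COUNTERS (100)

/* Write an initial binary counter file of exactly
 * sizeof(int) * NUM_COUNTERS zero bytes, whatever sizeof(int) is.
 * Returns 0 on success, -1 on failure. */
int
init_binary_counters(const char *path)
{
    int counts[NUM_COUNTERS] = { 0 };   /* all elements zero-initialized */
    FILE *f;

    f = fopen(path, "wb");
    if(f == 0) return -1;

    if(fwrite(counts, sizeof(*counts), NUM_COUNTERS, f) != NUM_COUNTERS) {
        fclose(f);
        return -1;
    }

    return fclose(f) == 0 ? 0 : -1;
}
```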

The advantage of using binary files is that reading and writing them is both simpler and faster. The disadvantages are (a) you can’t look at or update the binary data with your favorite text editor any more, and (b) the file may no longer be portable from one machine to another, if the different machines have different endianness or different values of sizeof(int). The second problem we can deal with by converting the data to a standard word size and byte order before storing it, but then we lose some advantages of speed.
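As a sketch of that conversion, the POSIX htonl and ntohl routines translate between the host’s byte order and big-endian “network” order, and using uint32_t fixes the word size at 32 bits; the helper names here are invented, and counts are assumed to be non-negative and to fit in 32 bits:

```c
#include <stdint.h>
#include <arpa/inet.h>  /* htonl and ntohl: host <-> big-endian byte order */

/* Convert counters to fixed-size big-endian words before writing them out. */
void
encode_counts(const int *counts, uint32_t *wire, int n)
{
    int i;
    for(i = 0; i < n; i++) {
        wire[i] = htonl((uint32_t) counts[i]);
    }
}

/* Convert the on-disk representation back to native ints after reading. */
void
decode_counts(const uint32_t *wire, int *counts, int n)
{
    int i;
    for(i = 0; i < n; i++) {
        counts[i] = (int) ntohl(wire[i]);
    }
}
```

The wire array, rather than counts itself, is what gets passed to fwrite and fread; the resulting file has the same layout on every machine.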

A version that updates the file in place

We still may run into speed problems if NUM_COUNTERS is huge. The next program avoids rewriting the entire file just to update one value inside it. This program uses the fseek function to position the cursor inside the file. It opens the file using the "r+b" flag to fopen, which means to open an existing binary file for reading and writing.

#include <stdio.h>
#include <stdlib.h>

#define NUM_COUNTERS (100)      /* number of counters we keep track of */
#define COUNTER_FILE "/tmp/hit" /* where they are stored */

int
main(int argc, char **argv)
{
    int c;
    int count;
    FILE *f;

    if(argc < 2) {
        fprintf(stderr, "Usage: %s number\n", argv[0]);
        exit(1);
    }
    /* else */

    c = atoi(argv[1]);
    if(c < 0 || c >= NUM_COUNTERS) {
        fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
        exit(2);
    }

    f = fopen(COUNTER_FILE, "r+b");
    if(f == 0) {
        perror(COUNTER_FILE);
        exit(3);
    }

    /* read counter */
    fseek(f, sizeof(int) * c, SEEK_SET);
    fread(&count, sizeof(int), 1, f);

    printf("%d\n", ++count);

    /* write it back */
    fseek(f, sizeof(int) * c, SEEK_SET);
    fwrite(&count, sizeof(int), 1, f);
    fclose(f);

    return 0;
}

examples/persistence/binaryFileFseek.c

Note that this program is not only shorter than the last one, but it also avoids allocating the counts array. It is also less likely to run into trouble by running out of space during writing. If we ignore issues of concurrency, this is probably the best we can do with just stdio.

An even better version using mmap

We can do even better using the mmap routine, available in all POSIX-compliant C libraries. POSIX, which is short for Portable Operating System Interface, is supported by essentially all Unix-like operating systems and NT-based versions of Microsoft Windows. The mmap routine tells the operating system to “map” a file in the filesystem to a region in the process’s address space. Reading bytes from this region will read from the file; writing bytes to this region will write to the file (although perhaps not immediately). Even better, if more than one process calls mmap on the same file at once, they will share the memory region, so that updates made by one process will be seen immediately by the others (with some caveats having to do with how concurrent access to memory actually works on real machines).

Here is the program using mmap:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>   /* For mmap.  I think mman is short for "memory management." */

#define NUM_COUNTERS (100)      /* number of counters we keep track of */
#define COUNTER_FILE "/tmp/hit" /* where they are stored */
#define NEW_COUNTER_FILE COUNTER_FILE "~"  /* note use of constant string concatenation */

int
main(int argc, char **argv)
{
    int c;
    int *counts;
    int fd;

    if(argc < 2) {
        fprintf(stderr, "Usage: %s number\n", argv[0]);
        exit(1);
    }
    /* else */

    c = atoi(argv[1]);
    if(c < 0 || c >= NUM_COUNTERS) {
        fprintf(stderr, "Counter %d not in range 0..%d\n", c, NUM_COUNTERS - 1);
        exit(2);
    }

    /* open and map the file */
    fd = open(COUNTER_FILE, O_RDWR);
    if(fd < 0) {
        perror(COUNTER_FILE);
        exit(3);
    }
    counts = mmap(0, sizeof(*counts) * NUM_COUNTERS, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

    if(counts == MAP_FAILED) {   /* mmap returns MAP_FAILED, not 0, on error */
        perror(COUNTER_FILE);
        exit(4);
    }

    printf("%d\n", ++counts[c]);

    /* unmap the region and close the file just to be safe */
    munmap(counts, sizeof(*counts) * NUM_COUNTERS);
    close(fd);

    return 0;
}

examples/persistence/binaryFileMmap.c

Now the code for actually incrementing counts[c] and writing it to the file is trivial. Unfortunately, we have left stdio behind, and have to deal with low-level POSIX calls like open and close to get at the file. Still, this is probably the most efficient version we can hope for, and it becomes even better if we plan to do many updates to the same file, since we can just keep the file open.

Concurrency and fault-tolerance issues: ACIDity

All of the solutions described so far can fail if you run two copies of count_hit simultaneously. The mmap solution is probably the least vulnerable to failures, as the worst that can happen is that some update is lost if the same location is updated at exactly the same time. The other solutions can fail more spectacularly; simultaneous writes to /tmp/hit~ in the simple text file version, for example, can produce a wide variety of forms of file corruption. For a simple web page hit counter, this may not be a problem. If you are writing a back-end for a bank, you probably want something less vulnerable.

Database writers aim for a property called ACIDity from the acronym ACID = Atomicity, Consistency, Isolation, and Durability. These are defined for a system in which the database is accessed via transactions consisting of one or more operations. An example of a transaction might be ++counts[c], which we can think of as consisting of two operations: reading counts[c], and writing back counts[c]+1.

Atomicity means that either every operation in a transaction is performed or none is. In practice, this means that if the transaction fails, any partial progress must be undone.

Consistency means that at the end of a transaction the database is in a “consistent” state. This may just mean that no data has been corrupted (e.g. in the text data file we have exactly 100 lines and they’re all integer counts), or it may also extend to integrity constraints enforced by the database (e.g. in a database of airline flights, the fact that flight 2937 lands at HVN at 22:34 on 12/17 implies that flight 2937 exists, has an assigned pilot, etc.).

Isolation says that two concurrent transactions can’t detect each other; the partial progress of one transaction is not visible to others until the transaction commits.

Durability means that the results of any committed transaction are permanent. In practice this means that enough information to reconstruct the transaction is physically written to disk before the transaction is reported as complete.

How can we enforce these requirements for our hit counter? Atomicity is not hard: if I stop a transaction after a read but before the write, no one will be the wiser (although there is a possible problem if only half of my write succeeds). Consistency is enforced by the fseek and mmap solutions, since they can’t change the structure of the file. Isolation is not provided by any of our solutions, and would require some sort of locking (e.g. using flock) to make sure that only one program uses the file at a time. Durability is enforced by not having count_hit return until the fclose or close operation has succeeded (although full durability would require running fsync or msync to actually guarantee data was written to disk).
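As a sketch of what such locking might look like, here is the fseek-based update wrapped in an exclusive flock lock, with fflush and fsync thrown in for durability. The helper name locked_increment is invented, and flock is a BSD/Linux call rather than part of the C standard:

```c
#include <stdio.h>
#include <sys/file.h>   /* flock */
#include <unistd.h>     /* fsync */

/* Increment counter c in an already-open binary counter file f,
 * holding an exclusive lock for the whole read-increment-write
 * transaction.  Returns the new count, or -1 on error. */
int
locked_increment(FILE *f, int c)
{
    int count;
    int fd = fileno(f);

    if(flock(fd, LOCK_EX) != 0) return -1;   /* blocks until we get the lock */

    fseek(f, sizeof(int) * c, SEEK_SET);
    if(fread(&count, sizeof(int), 1, f) != 1) {
        flock(fd, LOCK_UN);
        return -1;
    }

    count++;

    fseek(f, sizeof(int) * c, SEEK_SET);
    fwrite(&count, sizeof(int), 1, f);
    fflush(f);        /* push stdio's buffer to the operating system */
    fsync(fd);        /* and ask the operating system to push it to disk */

    flock(fd, LOCK_UN);
    return count;
}
```

Because no other locker can get in between the fread and the fwrite, the lost-update problem goes away; the price is that concurrent copies of count_hit now wait for each other.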

Though it would be possible to provide full ACIDity with enough work, this is a situation where using an existing well-debugged tool beats writing our own. Depending on what we are allowed to do to the machine our program is running on, we have many options for getting much better handling of concurrency. Some standard tools we could use are:

  • gdbm. This is a minimal hash-table-on-disk library that uses simplistic locking to get isolation. The advantage of this system is that it’s probably already installed on any Linux machine. The disadvantage is that it doesn’t provide much functionality beyond basic transactions.
  • Berkeley DB is a fancier hash-table-on-disk library that provides full ACIDity but not much else. There is a good chance that some version of this is also installed by default on any Linux or BSD machine you run into.
  • Various toy databases like SQLite or MySQL provide tools that look very much like serious databases with easy installation and little overhead. These are probably the solutions most people choose, especially since MySQL is integrated tightly with PHP and other Web-based scripting languages. Such a solution also allows other programs to access the table without having to know a lot of details about how it is stored, because the SQL query language hides the underlying storage format.
  • Production-quality databases like PostgreSQL, SQL Server, or Oracle provide very high levels of robustness and concurrency at the cost of requiring non-trivial management and possibly large licensing fees. This is what you pick if you really are running a bank.
