Amortized Complexity – a Tool for Graph Algorithms (among others)

There are a lot of very cool problems in computer science that can be solved by using
an appropriate data structure, and those data structures are often easiest to describe in terms
of graphs. And when you analyze those data structures, one thing that frequently comes up is amortized algorithmic complexity. Amortized complexity has been occupying my thoughts lately,
because it’s come up in several real problems, so I’m in the mood to write about it, and it’ll be
useful later.

The idea of amortized complexity is that for some structures, the worst-case cost
of a series of operations is different from the worst-case cost of a single operation. In amortized analysis, you consider cases where some operation is inexpensive most of the time – but to keep it inexpensive most of the time, you need to periodically do something expensive.

As usual, the idea is clearer by example. Suppose you want to have a list of things. You want to be able to access any element by position quickly. You also want to be able to quickly add new things at the end of the list.

If you didn’t want to be able to add things to the end of the list – that is, you knew the size of the list in advance – then it would be easy. You could just use an array, a sequence of values stored in adjacent memory locations. Then accessing an element by position is simple – just take the location of the beginning of the array, and add to it the position of the element you want. Updating works the same way – just store the value at that position. For example, in C, for a list of integers:

#include <stdlib.h>

typedef int* int_list;

int_list create_list(int size) {
    /* One block of adjacent memory locations, one slot per element. */
    return (int *) malloc(size * sizeof(int));
}

int get_from_list(int_list list, int position) {
    /* Access is just the base address plus the offset. */
    return *(list + position);
}

void set_in_list(int_list list, int position, int value) {
    *(list + position) = value;
}
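
Just to show the interface in action, here’s a tiny usage sketch. The main function below is mine, not part of the original listing, and it assumes the definitions above are in the same file:

#include <stdio.h>

int main(void) {
    int_list squares = create_list(10);
    for (int i = 0; i < 10; i++) {
        set_in_list(squares, i, i * i);
    }
    printf("squares[7] = %d\n", get_from_list(squares, 7));
    free(squares);
    return 0;
}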

But if you want to be able to extend the list, then you can’t just use a simple array. An array gives you fixed length. So what do you do? You pull out the old classic hacking aphorism: “When all else fails, add indirection”. Instead of using an array, you use a structure containing a pointer to an array,
and a length indicator. That way, you can replace the array with a longer one when you need to. Done
naively, you get something like:

#include <stdlib.h>

typedef struct s_list {
    int length;
    int *array;
} *list;

list createList(int size) {
    list l = (list) malloc(sizeof(struct s_list));
    l->length = size;
    l->array = (int *) malloc(size * sizeof(int));
    return l;
}

int get_from_list(list l, int pos) {
    return *(l->array + pos);
}

void add_to_list(list l, int val) {
    /* Allocate a new array, and copy the old values into it. */
    int *new_array = (int *) malloc((l->length + 1) * sizeof(int));
    for (int i = 0; i < l->length; i++) {
        *(new_array + i) = *(l->array + i);
    }
    *(new_array + l->length) = val;
    /* Replace the old array. */
    free(l->array);
    l->array = new_array;
    l->length++;
}

Now, this obviously stinks. Every time you add an element to the list, you need
to copy the entire list. So adding an element to a list has complexity O(N) (where N is the size of the list); starting from an empty list and adding N elements is O(N²). That’s not good.
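
To see where that quadratic bound comes from, count the element writes (a worked sum added here for illustration): the i-th append copies the i-1 elements already in the list and then writes one more, so building an N-element list costs

\[
  \sum_{i=1}^{N} \bigl((i-1) + 1\bigr) \;=\; \sum_{i=1}^{N} i \;=\; \frac{N(N+1)}{2} \;=\; O(N^2).
\]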

The problem is, how can you do better? If you stop using an array, then you’ll lose the
fast retrieval and update by position. If you keep using an array, and you don’t know its
maximum size, you’re stuck – you’re eventually going to need to copy the array, and copying
the array is O(N) – so the worst-case performance of adding to the array is O(N).

The trick is to consider a series of operations. Suppose you use the following
strategy:

  • Keep the real length of the array and the apparent length of the list separately.
  • Each time you add an element, if the real length of the array is larger than
    the length of the list, put the value in an unused slot of the array, and
    add one to the apparent length of the list.
  • If you want to add an element, and the size of the array is the same as the size of the
    list, then re-allocate the array, making it double the size of the old array, copy the
    elements into the new array, and then add the new element.

In code, that looks something like this:

#include <stdlib.h>

typedef struct s_elist {
    int list_length;   /* the apparent length of the list */
    int array_size;    /* the real length of the underlying array */
    int *array;
} *elist;

void add_to_elist(elist el, int val) {
    if (el->list_length < el->array_size) {
        /* There's still room: fill in the next unused slot. */
        *(el->array + el->list_length) = val;
        el->list_length++;
    } else {
        /* The array is full: double it, copy, then insert. */
        int new_size = el->array_size * 2;
        int *new_array = (int *)malloc(new_size * sizeof(int));
        for (int i = 0; i < el->list_length; i++) {
            *(new_array + i) = *(el->array + i);
        }
        *(new_array + el->list_length) = val;
        /* Replace the old array. */
        free(el->array);
        el->array = new_array;
        el->array_size = new_size;
        el->list_length++;
    }
}
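
The post never shows how an elist gets created, so here’s a minimal constructor and usage sketch to go with the code above. The name create_elist, the initial capacity of 4, and the main function are my own additions, and this assumes the struct and add_to_elist above are in the same file:

#include <stdio.h>

elist create_elist(void) {
    /* Start with a small, arbitrary capacity; it will double as needed. */
    elist el = (elist) malloc(sizeof(struct s_elist));
    el->list_length = 0;
    el->array_size = 4;
    el->array = (int *) malloc(el->array_size * sizeof(int));
    return el;
}

int main(void) {
    elist el = create_elist();
    for (int i = 0; i < 100; i++) {
        add_to_elist(el, i * i);
    }
    /* Access is still just base-plus-offset into the underlying array. */
    printf("el[10] = %d, length = %d, capacity = %d\n",
           *(el->array + 10), el->list_length, el->array_size);
    return 0;
}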

What’s the complexity of add_to_elist? Well, the worst case for a single insert is still O(N), where N is the current size of the list. But suppose you actually add N elements to the list. The total cost is not
O(N²); it’s just O(N). After you reallocate the array from size C to size 2×C,
you get to do about C insertions of cost 1 each before the array fills up again, and only then do you pay about 2×C to copy everything into the next, doubled array. So if you look at the performance of the series as a whole, you can amortize the cost of each copy over the cheap inserts that come before it: charge each insert its own cost of 1, plus 2 toward the next copy, and the amortized cost of an insert is at most 3. That’s a constant, no matter how long the list gets.
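
Written out as a sum (my own restatement, counting one unit per element write or copy): starting from a one-element array and appending N elements, the array doubles at capacities 1, 2, 4, …, 2^(k-1), where 2^(k-1) < N ≤ 2^k. The N writes plus the copies done at each doubling come to

\[
  N + \bigl(1 + 2 + 4 + \cdots + 2^{k-1}\bigr) \;=\; N + 2^{k} - 1 \;<\; N + 2N \;=\; 3N,
\]

so the amortized cost per append is below 3, even though a single append can cost O(N).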

There are two different ways of working out amortized costs: credit-based and debt-based approaches.
In a credit-based approach, each time you do the cheap operation, you charge it more than it actually costs, banking the extra as credit; the expensive operation is paid for out of that credit. If you can show that you’ll never perform the expensive operation without having enough credit to
pay for it, then the amount you charged the cheap operation is its amortized cost.

The debt-based approach is similar, but you do the expensive operation first. If you don’t owe
any time, you’re allowed to perform the expensive operation; doing it puts you in debt, and while you’re
in debt, you can’t perform it again. Each time you perform the cheap operation, you charge
it more than it actually costs, and use the extra to pay off part of the debt. If you can show that you never
need to perform the expensive operation while you’re still in debt, then the amount you charged the cheap operation
is its amortized cost.

Let’s look at an example of pre-payment. We start with an empty list backed by an array of size N, and start adding. Each
inexpensive operation really costs 1, but we’ll charge it as 2, so each time we do a cheap add, we bank 1
unit of credit. After N additions, we’ve accumulated N units of credit, and that’s exactly when we need to reallocate
the array, which costs N. So we’re in the clear for the first doubling. To stay in the clear for every later doubling (where 2×N elements get copied after only N cheap adds), charge each insert 3 instead: 1 for the insert itself, plus 2 of credit. Either way, the amortized cost of an insert is a small constant: O(1).
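
Here’s a small, self-contained simulation of that bookkeeping. It’s my own illustration, not code from the post: it tracks only the sizes and the credit, charges each insert 3 units (1 for the write itself, 2 banked as credit), pays every doubling entirely out of the bank, and asserts that the balance never goes negative.

#include <assert.h>
#include <stdio.h>

/* Charge per insert: 1 pays for the insert itself, the rest is banked. */
#define AMORTIZED_CHARGE 3

int main(void) {
    long capacity = 1;   /* real size of the (imaginary) array */
    long length = 0;     /* apparent size of the list */
    long credit = 0;     /* banked time, in element-copy units */

    for (long i = 0; i < 1000000; i++) {
        if (length == capacity) {
            /* Expensive step: copying `capacity` elements, paid from credit. */
            credit -= capacity;
            capacity *= 2;
            assert(credit >= 0 && "the charge is too small to cover copying");
        }
        /* Cheap step: one write, plus the banked portion of the charge. */
        credit += AMORTIZED_CHARGE - 1;
        length++;
    }
    printf("inserted %ld elements, leftover credit = %ld\n", length, credit);
    return 0;
}

If you drop the charge to 2, the assert fires at the second doubling, which is exactly why the charge needs to be 3 once the list keeps growing.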

13 thoughts on “Amortized Complexity – a Tool for Graph Algorithms (among others)”

  1. bigTom

    This is very similar to using a buffer (usually thought of as an aid to I/O performance). Normal practice is to choose a fixed buffer size, and then when it is filled (or close to being filled), you do the expensive update. Now, with amortized cost in mind, would you come up with a more optimal strategy (perhaps one where the buffer size increases over time)?

  2. Tyler DiPietro

    I could imagine a similar technique for stencil buffered shadow volume rendering. Specifically, off the top of my head, you may be able to deal with peripheral occlusion problems by updating a similar data structure and rendering it to the stencil buffer. I should probably put more thought into that.

  3. llewelly

    *(list + position)

    I’m so glad you didn’t succumb to the temptation to save two chars and two spaces and write the terse and traditional but confusing:

    list[position]

    Joking aside, fine article on amortized complexity, an issue I first encountered in 1996 or so, reading about what would soon become the standard C++ library, as it uses amortized complexity to express some of its resource consumption requirements. For example, if v is a std::vector, v.push_back(t) must have O(1) amortized complexity.

  4. Vinay Sachdev

    I had read some of your old blogs like “Turing Equivalent vs. Turing Complete”, “Basic Computational Complexity”, etc., and they were really excellent. But lately I have lost track due to time constraints. Today I just stumbled on it again and I enjoyed it very much.
    Can you suggest some good maths books for programmers?
    Basically, I am working on the development of a statistical product.

  5. Mark C. Chu-Carroll

    llewelly:
    I wanted to make it clear what array access meant. For any real program, I would use “[]”. But “[]” doesn’t express the idea of array access as a simple offset computation as concretely as “array + position” does. (“[]” is very opaque; it just says “look up the index”; that could mean a simple offset, but it could also mean “look up in a hashtable”, etc.)

  6. bearophile

    You say to double the size of the old array, but I have seen that for many appends (in C-like code) a geometric growth factor of 1.4 or 1.3 is faster. How is this possible?

  7. Daniel

    How would this compare to the efficiency of a hash table? Both seem to have O(1) access… but I’m finding it hard to compare insertion…

  8. Mark C. Chu-Carroll

    Daniel:
    Hash tables are pretty different. A good hash table is ideally also O(1) (often, because of imperfect hash functions, it’s slightly worse than that), but there are also the constant factors. An array lookup consists of: get the base address, add the offset, fetch or store. A hash table access is a lot more complicated: compute the hash code, find the bucket, search the bucket. Hash tables also have worse locality properties.

  9. Mark C. Chu-Carroll

    bearophile:
    I’ve seen a lot of code that uses lower growth rates – 1.5, or even less. With that, you’re doing copies more frequently. The end result is less memory use, but the amortized performance is worse. If I’m doing my math right (which is not at all certain when I’m doing it in my head…), the worst-case amortized cost of growth rate 1.5 is O(lg(n)), instead of O(1).
    The catch is that most of the time, lists really grow towards a maximum bound. It’s going to grow some number of times – but the growth generally becomes less frequent, and eventually stops. So the more frequent copying of a lower growth rate can pay off in the end by wasting less memory, and thereby increasing overall data locality.

  10. Carl Witty

    Mark (and bearophile):
    For any given growth factor k, the amortized cost is still O(1). The cost associated only with the recopying varies something like k/(k-1), so using a growth rate of 1.5 costs 50% more than a growth rate of 2. (This assumes that allocating a size-M array has a cost of M.) The advantage of using a smaller growth rate is less memory wasted in the worst case.
    It would surprise me if a growth factor of 1.3 was ever faster than a growth factor of 2, unless your distribution of expected final array sizes is rather peculiar.

  11. Simon Farnsworth

    Carl:
    A small growth factor is faster under some circumstances on OSes with poor overcommit handling. In these OSes, space is always allocated for every page you request, either in the pagefile, or in RAM; if you’re using most of physical RAM, the cost of the accounting can kill you.
    Further, some of these OSes assume that you’re going to use an allocation shortly after you claim it; they therefore induce disk thrash by swapping out enough stuff to let you use the entire allocation, then swapping it straight back in to let you use it.
    Finally, many allocator algorithms cope badly with some patterns of address space fragmentation; for example, on a 16-bit system with a 32-bit address space, it’s not uncommon to compare the low 16 bits of the size requested with the low 16 bits of the block you’re thinking about allocating (as most allocations are small). In this setup, as soon as the sizes go above 64KByte, a factor of 2 makes the allocator check every old block that you’re discarding. A factor of (e.g.) 1.5 prevents this.

  12. Vinicius de Sa

    It seems to me that the amortized cost is O(log n) — and not O(1) — for whatever growth rate we use.
    Let’s say our initial array has size C, where C is some constant. We want to insert N > C elements. The growth factor is, say, k. The number of times we’ll have to copy the array contents into newly allocated arrays of size k times bigger than their predecessors is x, where x is the minimum integer that satisfies C * k^x >= N. Thus, x >= log_k(N/C). Since C is a constant, our minimum integer x is O(log_k N) = O(log N / log k) = O(log N) — given k is a constant, of course, not a function of N.
    Cheers to you, and congrats for the fine article.

  13. wazow

    @Vinicius: You are right that the number of times the array will have to grow is O(log n). Still, the amortized cost of insertion is O(1) for every k > 1. These are two different things that you are confusing.

