Archive for July, 2006

Why you shouldn’t use __attribute__((packed))

Monday, July 31st, 2006

gcc supports an extension that allows structures or structure members to be marked with __attribute__((packed)), which tells gcc to leave out all padding between members. Sometimes you need this, when you have to guarantee the exact layout of a structure. For example, you might have something like

    struct my_struct {
            uint8_t  field1;
            uint16_t field2;
    } __attribute__((packed));

Without the packed attribute, the struct will have padding between field1 and field2, and that’s no good if this struct is something that has to match hardware or be sent on a wire.

However, it’s actively harmful to add the attribute to a structure that’s already going to be laid out with no padding. Sometimes, when a structure needs to be laid out without padding (because of hardware or a wire protocol), people are tempted to add the attribute to a struct like the following, “just to let the compiler know”:

    struct my_struct {
            uint32_t field1;
            uint32_t field2;
    } __attribute__((packed));

But adding __attribute__((packed)) goes further than just telling gcc that it shouldn’t add padding — it also tells gcc that it can’t make any assumptions about the alignment of accesses to structure members. And this leads to disastrously bad code on some architectures.

To see this, consider the simple code

    struct foo { int a; };
    struct bar { int b; } __attribute__((packed));

    int c(struct foo *x) { return x->a; }
    int d(struct bar *x) { return x->b; }

On architectures like x86, x86-64 and powerpc, both functions generate the same code. But take a look at what happens on ia64:

   0000000000000000 <c>:
      0:       13 40 00 40 10 10       [MBB]       ld4 r8=[r32]
      6:       00 00 00 00 10 80                   nop.b 0x0
      c:       08 00 84 00                         br.ret.sptk.many b0;;

   0000000000000010 <d>:
     10:       09 70 00 40 00 21       [MMI]       mov r14=r32
     16:       f0 10 80 00 42 00                   adds r15=2,r32
     1c:       34 00 01 84                         adds r32=3,r32;;
     20:       19 80 04 1c 00 14       [MMB]       ld1 r16=[r14],1
     26:       f0 00 3c 00 20 00                   ld1 r15=[r15]
     2c:       00 00 00 20                         nop.b 0x0;;
     30:       09 70 00 1c 00 10       [MMI]       ld1 r14=[r14]
     36:       80 00 80 00 20 e0                   ld1 r8=[r32]
     3c:       f1 78 bd 53                         shl r15=r15,16;;
     40:       01 00 00 00 01 00       [MII]       nop.m 0x0
     46:       e0 70 dc ee 29 00                   shl r14=r14,8
     4c:       81 38 9d 53                         shl r8=r8,24;;
     50:       0b 70 40 1c 0e 20       [MMI]       or r14=r16,r14;;
     56:       f0 70 3c 1c 40 00                   or r15=r14,r15
     5c:       00 00 04 00                         nop.i 0x0;;
     60:       11 00 00 00 01 00       [MIB]       nop.m 0x0
     66:       80 78 20 1c 40 80                   or r8=r15,r8
     6c:       08 00 84 00                         br.ret.sptk.many b0;;

gcc gets scared about unaligned accesses and generates six times as much code (96 bytes vs. 16 bytes)! sparc64 goes similarly crazy, bloating from 12 bytes to 52 bytes:

   0000000000000000 <c>:
      0:       81 c3 e0 08     retl
      4:       d0 42 00 00     ldsw  [ %o0 ], %o0
      8:       30 68 00 06     b,a   %xcc, 20 <d>

   0000000000000020 <d>:
     20:       c6 0a 00 00     ldub  [ %o0 ], %g3
     24:       c2 0a 20 01     ldub  [ %o0 + 1 ], %g1
     28:       c4 0a 20 02     ldub  [ %o0 + 2 ], %g2
     2c:       87 28 f0 18     sllx  %g3, 0x18, %g3
     30:       d0 0a 20 03     ldub  [ %o0 + 3 ], %o0
     34:       83 28 70 10     sllx  %g1, 0x10, %g1
     38:       82 10 40 03     or  %g1, %g3, %g1
     3c:       85 28 b0 08     sllx  %g2, 8, %g2
     40:       84 10 80 01     or  %g2, %g1, %g2
     44:       90 12 00 02     or  %o0, %g2, %o0
     48:       81 c3 e0 08     retl
     4c:       91 3a 20 00     sra  %o0, 0, %o0
     50:       30 68 00 04     b,a   %xcc, 60 <d+0x40>

So the executive summary is: don’t add __attribute__((packed)) to your code unless you know you need it.


Sunday, July 16th, 2006

After a smooth and surprisingly faster-than-scheduled trip, I’m safely in Ottawa, ready for the Kernel Summit and OLS. If you see me, say hello (of course I would expect you to do that even if you don’t read my blog).

By the way, the network in my hotel seems to have a broken DNS server that doesn’t respond to AAAA requests. This makes Firefox take forever to do anything, because it tries an IPv6 lookup that has to time out before it finds a site’s regular IPv4 address. However, I discovered the network.dns.disableIPv6 configuration option – setting this to true cured my browsing problems.
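For reference, the same fix can be made persistent by putting the preference (the real `network.dns.disableIPv6` option mentioned above) in a user.js file in your Firefox profile directory:

```javascript
// Disable IPv6 (AAAA) DNS lookups so a broken resolver
// can't stall every page load waiting for a timeout.
user_pref("network.dns.disableIPv6", true);
```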

What is this thing called RDMA?

Thursday, July 13th, 2006

A good way to kick this blog off is probably to explain what this RDMA stuff that I work on really is.
RDMA stands for Remote Direct Memory Access, but the term “RDMA” is usually used to refer to networking technologies that have a software interface with three features:

  • Remote direct memory access (Remote DMA)
  • Asynchronous work queues
  • Kernel bypass

InfiniBand host channel adapters (HCAs) are an example of network adapters that offer such an interface, but RDMA over IP (iWARP) adapters are starting to appear as well.

Anyway, let’s take a look at what these three features really mean.

Remote DMA

Remote DMA is pretty much what it sounds like: DMA on a remote system. The adapter on system 1 can send a message to the adapter on system 2 that causes the adapter on system 2 to DMA data to or from system 2’s memory. The messages come in two main types:

  • RDMA Write: includes an address and data to put at that address, and causes the adapter that receives it to put the supplied data at the specified address
  • RDMA Read: includes an address and a length, and causes the adapter that receives it to generate a reply that sends back the data at the address requested.

These messages are “one-sided” in the sense that they are processed by the adapter that receives them without involving the CPU on the receiving system.

Letting a remote system DMA into your memory sounds pretty scary, but RDMA adapters give fine-grained control over what remote systems are allowed to do. Going into the details now would make this entry way too long, so for now just trust me that things like protection domains and memory keys allow you to control access connection-by-connection and byte-by-byte, with separate read and write permissions.

To see why RDMA is useful, you can think of RDMA operations as “direct placement” operations: data comes along with information about where it’s supposed to go. For example, there is a spec for NFS/RDMA, and it’s pretty easy to see why RDMA is nice for NFS. The NFS/RDMA server can service requests in whatever order it wants and return responses via RDMA as they become available; by using direct placement, the responses can go right into the buffers where the client wants them, without requiring the NFS client to do any copying of data.

(There are actually some more complicated RDMA operations that are supported on InfiniBand, namely atomic fetch & add, and atomic compare & swap, but those aren’t quite as common so you can ignore them for now)

Asynchronous work queues

Software talks to RDMA adapters via an asynchronous interface. This doesn’t really have all that much to do with remote DMA, but when we talk about RDMA adapters, we expect this type of interface (which is called a “verbs” interface for some obscure historical reason).

Basically, to use an RDMA adapter, you create two kinds of objects: queue pairs (or QPs), which as the name suggests are pairs of work queues (a send queue and a receive queue), and completion queues (or CQs). When you want to do something, you tell the adapter to post an operation to one of your work queues. The operation executes asynchronously, and when it’s done, the adapter adds work completion information onto the end of your CQ. When you’re ready, you can go retrieve completion information from the CQ to see which requests have completed.

Operating asynchronously like this makes it easier to overlap computation and communication.

(Incidentally, as the existence of receive queues might make you think, RDMA adapters support plain old “two-sided” send/receive operations, in addition to one-sided RDMA operations. You can post a receive request to your local receive queue, and the next send message that comes in will be received into the buffer you provided. RDMA operations and send operations can be mixed on the same send queue, too)

Kernel bypass

The last feature common to RDMA adapters also has nothing to do with remote DMA per se: RDMA adapters allow userspace processes to do fast-path operations (posting work requests and retrieving work completions) directly with the hardware without involving the kernel at all. This is nice because these adapters are typically used for high-performance, latency-sensitive applications, and saving the system call overhead is a big win when you’re counting nanoseconds.

OK, that’s it for today’s edition of “RDMA 101.” Now we have some background to talk about some of the more interesting features of RDMA networking.

Where does it end? It ends right here.

Tuesday, July 11th, 2006

I couldn’t begin to guess what Gillette will put in the razor they come out with after their latest Gillette Fusion Power, since it’s hard to imagine how you top a razor with five blades and a “patented on-board micro-chip” that “optimizes the performance of the razor.”

But I bet the next Gillette razor, whatever it is, will run Linux.