Archive for the ‘hacking’ Category

Enterprise distro kernels

Sunday, June 24th, 2007

Greg K-H wrote recently about kernels for “Enterprise Linux” distributions. I’m not sure I get the premise of the article; after all, the whole point of having more than one distro company is that they can compete on the basis of the differences in what they do. So it makes no sense to me to present this issue as something that Red Hat and Novell have to agree on (and it also leaves out Ubuntu’s “LTS” distribution, although I’m not sure if that has taken any of the “enterprise distro” market). Obviously Novell sees a reason for both openSUSE and SLES; why should SLES and RHEL have to be identical?

In fact (although Greg didn’t seem to realize it when he wrote his article), there are already significant differences between the SLES and RHEL kernel updates. SLES has relatively infrequent “SP” releases, where the kernel ABI is allowed to break, while RHEL has update releases roughly every quarter but aims to keep the kernel ABI stable through the life of a full major release.

Greg seems to favor the third proposal in his article, namely rebasing to the latest upstream kernel on every update. However, I don’t think that can work for enterprise distros, for a reason that DaveJ alluded to in his response:

W[ith] each upstream point revision, we fix x regressions, and introduce y new ones. This isn’t going to make enterprise customers paying lots of $ each year very happy.

For a lot of customers, the whole point of staying on an enterprise distro is to stick with something that works for them. No kernel is bug-free and every enterprise distro kernel surely has some awful bugs; what enterprise customers want to avoid are regressions. If SLES10 works for my app on my hardware, then SLES10SP1 better not keel over on the same app and the same hardware because of a broken SATA driver or something like that.

Of course customers often want crazy-sounding stuff, for example, “Give me the 2.6.9 kernel from RHEL4, except I want the InfiniBand drivers from 2.6.21.” (And yes, since I work on InfiniBand a lot, that is definitely a real example; in fact a lot of effort goes into the “OpenFabrics Enterprise Distribution” (OFED) to make those customers happy.) A kernel hacker’s first reaction to that request is most likely, “Then you should just run 2.6.21.” But if you think some more about what the customers are asking for, it starts to make sense. What they are really saying is that they need the latest and greatest IB features (maybe support for new hardware, or a protocol that wasn’t implemented until long after the enterprise kernel was frozen), but they don’t want to risk some new glitch in a part of the kernel where RHEL4’s 2.6.9 is perfectly fine for them.

This is just a special case of Greg’s “support the latest toy” request, and if there were some technical solution for pulling just a subset of new features into an enterprise kernel, that would be great. But as I said before, without a major change in the upstream development process, rebasing enterprise kernels during the lifetime of a major release doesn’t seem to be what customers of enterprise distros want. And I agree with Linus when he says that you can’t slow down development without people losing interest or going off onto a branch that’s too unstable for real users to test. So I don’t think we want to change our development process to be closer to an enterprise distro’s.

And given how new features often have dependencies on core kernel changes, I don’t see much hope of a technical solution for the “latest toy” problem. In fact the OFED solution of having the community that works on a particular class of new toys do the backporting seems to be about the best we can do for now.

Are we having, like, a conversation?

Thursday, June 7th, 2007

Pete replied to my earlier reply and argued that mb() should never be used:

NOOOOOOOOOOO!

In theory, it’s possible to program in the Linux kernel by using nothing but mb(). In practice, experience teaches us that every use of mb() in drivers is a bug. I’m not kidding here. For some reason, even competent and experienced hackers cannot get it right.

I agree that memory barriers should be avoided, which is why I said that “a fat comment about why you need to mess with memory barriers anyway” should always go along with a barrier. However:

  1. When a memory barrier is required, I don’t think that using a spinlock as an obfuscated stand-in is an improvement. If a spinlock is just serving as a memory barrier, then you probably don’t have a clear idea of what data the spinlock is protecting. And that’s going to lead to problems when someone does something like expanding the locked region by moving where the spinlock is taken.
  2. Spinlocks actually don’t save you from having your driver blow up on big machines. The number of people who understand mmiowb() is far smaller than the number of people who understand mb(), but no matter how many spinlocks you lock, you may still need mmiowb() for your driver to work on SGI boxes. (Actually I think this is a problem with the interface presented to drivers: someday, when I really do feel like tilting at a windmill, I’ll try to kill off mmiowb().)

It’s funny: when I read Pete’s entry, I did a quick grep of drivers/usb looking for mb() to show that even non-exotic device drivers need memory barriers. And the first example I pulled up at random, in drivers/usb/host/uhci-hcd.c looks like a clear bug:

        /* End Global Resume and wait for EOP to be sent */
        outw(USBCMD_CF, uhci->io_addr + USBCMD);
        mb();
        udelay(4);
        if (inw(uhci->io_addr + USBCMD) & USBCMD_FGR)
                dev_warn(uhci_dev(uhci), "FGR not stopped yet!\n");

The mb() after the outw() does not make sure that the IO reaches the device, since it is probably a posted PCI write which might lurk in a chipset buffer somewhere long after the CPU has retired it. The only way to make sure that the write has actually reached the device is to do a read from the same device to flush the posted write. So the udelay() might expire before the previous write has even reached the device. I guess I’ll be a good citizen and report this on the appropriate mailing list.

An example of what I consider a proper use of a memory barrier in a device driver is in my masterpiece drivers/infiniband/hw/mthca/mthca_cq.c:

        cqe = next_cqe_sw(cq);
        if (!cqe)
                return -EAGAIN;

        /*
         * Make sure we read CQ entry contents after we've checked the
         * ownership bit.
         */
        rmb();

The device DMAs a status block into our memory and then sets a bit in that block to mark it as valid. If we check the valid bit, then we need an rmb() to make sure that we don’t operate on bogus data because the CPU has speculatively executed the reads of the rest of the status block before it was written and then checked the valid bit after it was set (and yes, this was actually seen happening on the PowerPC 970). I can’t think of any reasonable way to write this with a spinlock. Would I just create a dummy spinlock to take and drop for no reason other than memory ordering?

Of course this is driving exactly the sort of high-speed exotic hardware that Pete talked about in his blog, but I think the principle applies to any hardware that DMAs structures to or from host memory. For example, tg3_rx() in drivers/net/tg3.c seems to have a barrier for similar reasons. If you’re writing a driver for DMA-capable hardware, you better have some understanding of memory ordering.

If I’m a crusader, where’s my cape?

Tuesday, June 5th, 2007

Pete Zaitcev replied to my earlier post about misuse of atomic variables. I never really thought of myself as a crusader or as particularly quixotic, but I’ll respond to the technical content of Pete’s post. I think the disagreement was with my disparagement of the anti-pattern:

int x;

int foo(void)
{
        int y;

        spin_lock(&lock);
        y = x;
        spin_unlock(&lock);

        return y;
}

Pete is entirely correct that spin_lock() and spin_unlock() have full memory barriers; otherwise it would be impossible for anyone to use spinlocks correctly (and of course there’s still mmiowb() to trip you up when someone runs your driver on a big SGI machine).

However, I still think that using a spinlock around an assignment that’s atomic anyway is at best pretty silly. If you just need a memory barrier, then put an explicit mb() (or wmb() or rmb()) there, along with a fat comment about why you need to mess with memory barriers anyway.

Also, Pete is not entirely correct when he says that atomic operations lack memory barriers. All atomic operations that return a value (e.g. atomic_dec_and_test()) do have a full memory barrier, and if you want a barrier to go with an atomic operation that doesn’t return a value, such as atomic_inc(), the kernel does supply a full range of primitives such as smp_mb__after_atomic_inc(). The file Documentation/memory-barriers.txt in the kernel source tree explains all of this in excruciating detail.

In the end the cost of an atomic op is roughly the same as the cost of a spin_lock()/spin_unlock() pair (they both have to do one locked operation, and everything else is pretty much in the noise for all but the most performance-critical code). Spinlocks are usually easier to think about, so I recommend only using atomic_t when it fits perfectly, such as a reference count (and even then using struct kref is probably better if you can). I’ve found from doing code review that code using atomic_t almost always has bugs, and we don’t have any magic debugging tools to find them (the way we have CONFIG_PROVE_LOCKING, CONFIG_DEBUG_SPINLOCK_SLEEP and so on for locks).

By the way, what’s up with mark mazurek? His blog seems to be an exact copy taken from Pete’s blog feed, with the added bonus of adding a comment to the post in my blog that Pete linked to. There are no ads or really anything beyond an exact duplicate of Pete’s blog, so I can’t figure out what the angle is.

Atomic cargo cults

Sunday, May 13th, 2007

Cargo cult programming refers to “ritual inclusion of code or program structures that serve no real purpose.” One annoying example of this that I see a lot in kernel code that I review is the inappropriate use of atomic_t, in the belief that it magically wards off races.

This type of bogosity is usually marked by variables or structure members of type atomic_t, which are only ever accessed through atomic_read() and atomic_set() without ever using a real atomic operation such as atomic_inc() or atomic_dec_and_test(). Such programming reaches its apotheosis in code like:

        atomic_set(&head, (atomic_read(&head) + 1) % size);

(and yes, this is essentially real code, although I’ve paraphrased it to protect the guilty from embarrassment).

The only point of atomic_t is that arithmetic operations (like atomic_inc() et al.) are atomic in the sense that there is no window where two racing operations can step on each other. This helps with situations where an int variable might have ++i and --i race with each other, since these operations are probably implemented as read-modify-write sequences. So if i starts out as 0, the value of i after the two operations might be 0, -1 or 1, depending on when each operation reads the value of i and in what order they write back the final value.

If you never use any of these atomic operations, then there’s no point in using atomic_set() to assign a variable and atomic_read() to read it. Your code is no more or less safe than it would be just using normal int variables and assignments.

One way to think about atomic operations is that they might be an optimization instead of using a lock to protect access to a variable. Unfortunately I’m not sure if this will help anyone get things right, since I also see plenty of code like:

int x;

int foo(void)
{
        int y;

        spin_lock(&lock);
        y = x;
        spin_unlock(&lock);

        return y;
}

which is the analogous cargo-cult anti-pattern for spinlocks.

Maybe the best rule would be: if the only functions using atomic_t in your code are atomic_set() and atomic_read(), then you need to write a comment explaining why you’re using atomic at all. Since the vast majority of the time, such a comment will be impossible to write, maybe this will cut down on cargo cult programming a bit. Or more likely it will just make code review more fun by generating nonsensical comments for me to chuckle at.

Oh what a relief it is

Thursday, November 16th, 2006

I just converted the main repositories for two libraries that I maintain, libibverbs and libmthca, from subversion to git. And even though I work with git a lot when working on the kernel, it’s still a shock to see how much easier git makes everything.

A few amusing examples:

$ du -sh libibverbs.*
980K    libibverbs.git
1.3M    libibverbs.svn

Yes, the git checkout with full development history takes up less disk space than a svn working copy with just the tip of the trunk (accessing history goes over the network for svn). And svn needing the network to do stuff has implications beyond just “work on my laptop on a plane”:

$ cd libibverbs.svn
$ time svn log > /dev/null

real    0m0.820s
user    0m0.032s
sys     0m0.004s

$ cd ../libibverbs.git
$ time git log > /dev/null

real    0m0.005s
user    0m0.004s
sys     0m0.000s

Yes, git is more than 100 times faster for showing the log!

And these performance differences make a real productivity difference. With git, I’m much more likely to look through the history, examine past changes, and so on, which means that I waste less time figuring out things that I used to know.

2.6.19 merge plans for InfiniBand/RDMA

Thursday, August 17th, 2006

Here’s a short summary of what I plan to merge for 2.6.19. I sent this out via email to all the relevant lists, but I figured it can’t hurt to blog it too. Some of this is already in infiniband.git (git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband.git), while some still needs to be merged up. Highlights:

  • iWARP core support. This updates drivers/infiniband to work with devices that do RDMA over IP/ethernet in addition to InfiniBand devices. As a first user of this support, I also plan to merge the amso1100 driver for Ammasso RNICs. I will post this for review one more time after I pull it into my git tree for last-minute cleanups. But if you feel this iWARP support should not be merged, please let me know why now.
  • IBM eHCA driver, which supports IBM pSeries-specific InfiniBand hardware. This is in the ehca branch of infiniband.git, and I will post it for review one more time. My feeling is that more cleanups are certainly possible, but this driver is “good enough to merge” now and has languished out of tree for long enough. I’m certainly happy to merge cleanup patches, though.
  • mmap()ed userspace work queues for ipath. This is a performance enhancement for QLogic/PathScale HCAs but it does touch core stuff in minor ways. Should not be controversial.
  • I also have the following minor changes queued in the for-2.6.19 branch of infiniband.git:
       Ishai Rabinovitz:
             IB/srp: Add port/device attributes

       James Lentini:
             IB/mthca: Include the header we really want

       Michael S. Tsirkin:
             IB/mthca: Don't use privileged UAR for kernel access
             IB/ipoib: Fix flush/start xmit race (from code review)

       Roland Dreier:
             IB/uverbs: Use idr_read_cq() where appropriate
             IB/uverbs: Fix lockdep warning when QP is created with 2 CQs

Why you shouldn’t use __attribute__((packed))

Monday, July 31st, 2006

gcc supports an extension that allows structures or structure members to be marked with __attribute__((packed)), which tells gcc to leave out all padding between members. Sometimes you need this, when you want to make sure of the layout of a structure. For example, you might have something like

    struct my_struct {
            uint8_t  field1;
            uint16_t field2;
    } __attribute__((packed));

Without the packed attribute, the struct will have padding between field1 and field2, and that’s no good if this struct is something that has to match hardware or be sent on a wire.

However, it’s actively harmful to add the attribute to a structure that’s already going to be laid out with no padding. Sometimes, when a structure needs to be laid out without padding (because of hardware or wire protocol), people are tempted to add the attribute to a struct like the following, “just to let the compiler know”:

    struct my_struct {
            uint32_t  field1;
            uint32_t field2;
    };

But adding __attribute__((packed)) goes further than just telling gcc that it shouldn’t add padding — it also tells gcc that it can’t make any assumptions about the alignment of accesses to structure members. And this leads to disastrously bad code on some architectures.

To see this, consider the simple code

    struct foo { int a; };
    struct bar { int b; } __attribute__((packed));

    int c(struct foo *x) { return x->a; }
    int d(struct bar *x) { return x->b; }

On architectures like x86, x86-64 and powerpc, both functions generate the same code. But take a look at what happens on ia64:

   0000000000000000 <c>:
      0:       13 40 00 40 10 10       [MBB]       ld4 r8=[r32]
      6:       00 00 00 00 10 80                   nop.b 0x0
      c:       08 00 84 00                         br.ret.sptk.many b0;;

   0000000000000010 <d>:
     10:       09 70 00 40 00 21       [MMI]       mov r14=r32
     16:       f0 10 80 00 42 00                   adds r15=2,r32
     1c:       34 00 01 84                         adds r32=3,r32;;
     20:       19 80 04 1c 00 14       [MMB]       ld1 r16=[r14],1
     26:       f0 00 3c 00 20 00                   ld1 r15=[r15]
     2c:       00 00 00 20                         nop.b 0x0;;
     30:       09 70 00 1c 00 10       [MMI]       ld1 r14=[r14]
     36:       80 00 80 00 20 e0                   ld1 r8=[r32]
     3c:       f1 78 bd 53                         shl r15=r15,16;;
     40:       01 00 00 00 01 00       [MII]       nop.m 0x0
     46:       e0 70 dc ee 29 00                   shl r14=r14,8
     4c:       81 38 9d 53                         shl r8=r8,24;;
     50:       0b 70 40 1c 0e 20       [MMI]       or r14=r16,r14;;
     56:       f0 70 3c 1c 40 00                   or r15=r14,r15
     5c:       00 00 04 00                         nop.i 0x0;;
     60:       11 00 00 00 01 00       [MIB]       nop.m 0x0
     66:       80 78 20 1c 40 80                   or r8=r15,r8
     6c:       08 00 84 00                         br.ret.sptk.many b0;;

gcc gets scared about unaligned accesses and generates six times as much code (96 bytes vs. 16 bytes)! sparc64 goes similarly crazy, bloating from 12 bytes to 52 bytes:

   0000000000000000 <c>:
      0:       81 c3 e0 08     retl
      4:       d0 42 00 00     ldsw  [ %o0 ], %o0
      8:       30 68 00 06     b,a   %xcc, 20 <d>

   0000000000000020 <d>:
     20:       c6 0a 00 00     ldub  [ %o0 ], %g3
     24:       c2 0a 20 01     ldub  [ %o0 + 1 ], %g1
     28:       c4 0a 20 02     ldub  [ %o0 + 2 ], %g2
     2c:       87 28 f0 18     sllx  %g3, 0x18, %g3
     30:       d0 0a 20 03     ldub  [ %o0 + 3 ], %o0
     34:       83 28 70 10     sllx  %g1, 0x10, %g1
     38:       82 10 40 03     or  %g1, %g3, %g1
     3c:       85 28 b0 08     sllx  %g2, 8, %g2
     40:       84 10 80 01     or  %g2, %g1, %g2
     44:       90 12 00 02     or  %o0, %g2, %o0
     48:       81 c3 e0 08     retl
     4c:       91 3a 20 00     sra  %o0, 0, %o0
     50:       30 68 00 04     b,a   %xcc, 60 <d+0x40>

So the executive summary is: don’t add __attribute__((packed)) to your code unless you know you need it.