Presented by

  • Russell Currey

    Russell Currey
    @russelldotcc
    https://russell.cc

Russell is a Linux & open source hacker based in Canberra, Australia. He works at IBM OzLabs on various things for the Power platform, including kernel hardening and OpenPOWER platform continuous integration. He is the founder of the snowpatch project and can most commonly be found complaining, drinking tea and playing RuneScape (still).

Abstract

The Linux kernel does a lot of stuff, and runs on a lot of stuff. I'm sure we can all agree that this is a good thing, but the matrix of stuff it does and stuff it runs on keeps getting bigger and bigger! With thousands of commits each release and a widely distributed and decentralised developer community, how do we make sure that the kernel still works on everything, does everything it's supposed to do, and hasn't slowed anything down in the process?

In this session we're going to look at the huge variety of automated kernel testing projects to figure out what's going on, covering areas including:

  • per-patch CI to quickly test whether a developer broke something,
  • built-in kernel selftests and the push for more unit testing,
  • performance testing of the kernel itself and of userspace,
  • regression testing, especially for known security issues,
  • hardware testing, from enormous 512TB machines to huge farms of small SoCs.

By understanding the huge web of projects out there, hopefully we can figure out how to get more done more effectively. It's a difficult problem in the broad and uncoordinated space of Linux kernel development, but it's all in pursuit of the dream: the magical fantasy land with no duplication of code or effort, where everything is tested, where everyone knows where everything is, and where bugs are never introduced again.