Apple just released SwiftNIO, a framework for low-level asynchronous I/O in Swift. It's based on a Java framework called Netty (fun fact: one of Netty's developers now works on SwiftNIO at Apple). When I heard the presentation announcing the library at try! Swift Tokyo this year, it sounded super familiar: the rationale was much like the one Ryan Dahl gave when he announced Node.js.

SwiftNIO has already been integrated into web frameworks like Vapor, so you benefit for free if you're using the latest version in your server-side Swift projects. For most web projects you probably won't need to use SwiftNIO directly, but if you're doing something different (the sort of thing you'd use WebSockets for, say) and want to use Swift, this is the framework for you.

There is some sample code included in the git repository, but I thought a friendly tutorial-style walkthrough might be handy, so here you go:

Some Paperwork First

SwiftNIO is intended to be added to your project via the Swift Package Manager.

Add SwiftNIO to your Package.swift

// swift-tools-version:4.0
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "InfoServer",
    dependencies: [
        // Dependencies declare other packages that this package depends on.
        .package(url: "https://github.com/apple/swift-nio.git", from: "1.0.0"),
    ],
    targets: [
        // Targets are the basic building blocks of a package. A target can define a module or a test suite.
        // Targets can depend on other targets in this package, and on products in packages which this package depends on.
        .target(
            name: "InfoServer",
            dependencies: ["NIO"]),
    ]
)

Note we mention SwiftNIO in two places: once to include it as a dependency of the project, and a second time to actually use the library in our InfoServer target. If you leave out the second one, the package manager will download and compile SwiftNIO for you, but it won't be available to your code yet. This held me up for about 10 minutes, as I hadn't spent much time with the package manager before this.
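With the manifest in place (and assuming the standard package layout, with the executable's source in Sources/InfoServer/main.swift), the target can then pull the library in with a plain import:

```swift
// Sources/InfoServer/main.swift
// "import NIO" only resolves because the InfoServer target
// lists "NIO" in its dependencies in Package.swift.
import NIO
```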

The Fun Bits

Now you get to go directly to the fun bits and decide the behavior of your server. Activity in NIO is handled by a pipeline of channel handlers; each gets a chance to handle incoming or outgoing data in the order they were set up. We'll implement one of these to give our server app some behavior. First we'll make a class (call it whatever you want) that conforms to the ChannelInboundHandler protocol. This class will handle incoming data (asynchronously, of course) and give us a chance to do something about it.

class DateHandler: ChannelInboundHandler {

We'll need to add some associated types so the pipeline knows what type of data we expect to receive and send back out. Since we're only using one handler in our simple example, we'll both accept and produce the type ByteBuffer, which is NIO's custom type for handling data efficiently. I tried using String instead, but that silently failed because there was nothing in the pipeline to convert a buffer into a string for me.

typealias InboundIn = ByteBuffer
typealias OutboundOut = ByteBuffer

Now for the actual work:

func channelRead(ctx: ChannelHandlerContext, data: NIOAny) {
    let message = "Date is \(Date())\n"
    var buffer = ctx.channel.allocator.buffer(capacity: message.utf8.count)
    buffer.write(string: message)

    ctx.write(self.wrapOutboundOut(buffer), promise: nil)
}

channelRead is the function that gets executed any time our inbound handler receives data. The data parameter is our ByteBuffer object wrapped in a NIO-specific version of Any. For our example, we don't even care what's in there! We only care that someone sent us data, not what they actually sent. Regardless of the incoming message, we bundle a string into a buffer and write it out through our channel context. We use nil for the promise parameter because we don't care whether there are any errors or when the write finishes. In real life you might want to do something after the data is sent, like if you needed to send the data back in chunks because it was a whole gigabyte of information, but for now we don't care at all.
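If you did want to look at what was sent, you'd first unwrap the NIOAny back into the InboundIn type we declared. A minimal sketch (the logging here is my own addition, not part of the date server):

```swift
func channelRead(ctx: ChannelHandlerContext, data: NIOAny) {
    // Unwrap the NIOAny back into the ByteBuffer we declared as InboundIn.
    var inBuffer = self.unwrapInboundIn(data)
    // Read whatever the client sent as a UTF-8 string.
    if let received = inBuffer.readString(length: inBuffer.readableBytes) {
        print("client sent: \(received)")
    }
}
```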

The ChannelInboundHandler protocol requires two more methods, so we begrudgingly add some bare minimum implementations:

func channelReadComplete(ctx: ChannelHandlerContext) {
  ctx.flush()
}

func errorCaught(ctx: ChannelHandlerContext, error: Error) {
  print("error: \(error)")
  ctx.close(promise: nil)
}

When the channel finishes reading, we tell the context to flush. This sends a flush event down to any other handlers and eventually attempts to write the data we've supplied to the outbound socket. We also need to handle errors, so an easy way is just to print the error and ditch the connection. This is a bad idea in real life, but this is just play time, so it's fine.

The rest of the code needed to make things actually work is fairly cookie-cutter; most projects will need something very similar. It does seem like a bunch of boilerplate that could be avoided, but it's also an opportunity to control the important details of your high-performance server app.

let group = MultiThreadedEventLoopGroup(numThreads: System.coreCount)
let bootstrap = ServerBootstrap(group: group)
  .serverChannelOption(ChannelOptions.backlog, value: 256)
  .serverChannelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)
  .childChannelInitializer { channel in
    channel.pipeline.add(handler: DateHandler())
  }
  .childChannelOption(ChannelOptions.socket(IPPROTO_TCP, TCP_NODELAY), value: 1)
  .childChannelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)
  .childChannelOption(ChannelOptions.maxMessagesPerRead, value: 16)
  .childChannelOption(ChannelOptions.recvAllocator, value: AdaptiveRecvByteBufferAllocator())

defer {
  try! group.syncShutdownGracefully()
}

let channel = try bootstrap.bind(host: "localhost", port: 8080).wait()

print("Server is alive!")

try channel.closeFuture.wait()

print("Server closed")

First we need to create a group. For now there's only one kind of group, so it's an easy choice; this is also where we can limit the number of threads our app will use. Mine uses one per CPU core because I want it to use the whole machine for the best performance, but you could limit yours to a single thread if you wanted a small worker process instead. Next we make a ServerBootstrap with our group. The bootstrap object lets us specify all kinds of settings, from socket-level options that should look familiar if you've ever done socket programming, down to the specific buffer allocator you'd like to use. If hardcore low-level performance is important, it's possible you'll need your own custom allocator; if you don't know what these options do, it's probably safe to use the ones I used until you get told otherwise. This is also where we plug in our channel handler, so don't forget that: the childChannelInitializer closure runs for each new connection's channel, and we add our handler to its pipeline there. If you miss this step, your server will start up with no behavior, which is probably not what you want.
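For example, the small-worker-process variant mentioned above is just a matter of the thread count; a pared-down sketch (my own, trimming the options back to the required handler setup and the defaults):

```swift
// One event-loop thread instead of one per core; everything else
// falls back to SwiftNIO's defaults except the pipeline setup,
// which we still need so the server has behavior.
let group = MultiThreadedEventLoopGroup(numThreads: 1)
let bootstrap = ServerBootstrap(group: group)
  .childChannelInitializer { channel in
    channel.pipeline.add(handler: DateHandler())
  }
```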

Lastly we bind our bootstrap to localhost and run the server. This is just a fun demo, so I didn't see any need for anything fancier than a good old-fashioned hard-coded address. We defer the attempt to gracefully shut down our thread group because we want it to happen even if any of the functions below it throw errors.

And there you have it! An amazing working server-side app which very quickly and efficiently tells you the date any time you send literally anything over a socket connection. You can test it by running telnet from your terminal and typing things at your server app. Fun, right?