Ben will talk about building Unix command-line tools in Haskell: some of the standards and traditions that command-line tools should follow, and a handful of Haskell libraries that help make that happen - including command-line option parsing, pretty colours, and interacting nicely with other tools in a build chain.
* command line tools:
  - you run them at the command line - e.g. in a shell in a terminal window
  - pipes - composing things together, which as Haskellers we of course love
  - exit codes
  - but they're also used in other places: called by other programs. A very common example is build pipelines in CI systems; another might be one program calling out to e.g. `git`. So bear these scenarios in mind as we go through.
  - contrast: yes, I can launch Firefox from the command line, but the interactions are very different: there's a GUI and you click on stuff.
* I'm going to talk about libraries to do these things, and a little bit of the unix philosophy that goes with each concept as we go along.
Doug McIlroy's summary of the Unix philosophy (as quoted by Peter H. Salus): "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
* stdin, stdout and stderr
  - especially stdout vs stderr: "output" goes on stdout, "errors" go on stderr
  - a decent rule of thumb: if you were piping into another program, what data would you want to go into that other program (stdout), and what data would you like to go to a human reading the console/logs (stderr)?
  - how do I output a password prompt? (for example...) is that stderr, or do I wire into something else?
  - can I change behaviour based on whether stdin is a tty? (that's a convention, but I'm not sure where it's documented, and whether I can easily do that in Haskell)
  - pipes: stdin/stdout don't just go to/from the terminal. pseudo-Haskell: stream -> stream composition of parallel processes, with a stderr side stream, and exit/error handling that aborts the lot.
  - though a program can also do "anything" / "mutate the world", so the emphasis is not on stopping "mutate the world" but on interacting with the world as others expect.
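A minimal sketch of the stdout-vs-stderr split, plus the tty question: base's System.IO provides hIsTerminalDevice, so a program can at least ask whether a stream is attached to a terminal (the output strings here are made up for illustration):

```haskell
import System.IO (hIsTerminalDevice, hPutStrLn, stderr, stdout)

main :: IO ()
main = do
  -- diagnostic chatter goes to stderr, where a human (or a log) will see it
  interactive <- hIsTerminalDevice stdout
  hPutStrLn stderr ("stdout is a terminal: " ++ show interactive)
  -- the actual output - what you'd want the next program in a pipe
  -- to read - goes to stdout
  putStrLn "result data"
```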
* exit codes
  When a process exits, it returns a one-byte exit code - how a tool signals an error/exception to the OS:
  - 0 means success; other values mean failure
  - even the shitty way of bailing out - `error "FOO"` - gives an acceptable exit code. But be careful, if you exit for other reasons, to exit with failure - something I've seen missed in a bunch of immature tooling (docker and the PureScript compiler, for example, in years past): make sure that in addition to printing your error message you also exit with a failure code.
  - importance of exit codes for build pipelines (for example), e.g. `make` or Travis CI. Give an example of something (e.g. Travis) integrated with GitHub - that red cross comes from the exit code - and if your compiler or test suite doesn't exit that way, then the tests will come up green even though they've failed.
  - kinda feels like a reverse-Maybe: you can return an opaque success, or many different failure values. Generally, you would use those different values to represent different kinds of error that an automated caller might like to distinguish between; or just exit with `1` if you don't have anything more interesting. There's some posixy description of what some of the higher error codes represent - somewhere?
  Haskell modules:
    import System.Exit
    exitWith ExitSuccess :: IO a
    exitWith (ExitFailure 1) :: IO a
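For example, here's a hypothetical runChecks action wired up so that failure is reported both ways: as a message on stderr for humans, and as a non-zero exit code for automated callers.

```haskell
import System.Exit (ExitCode (..), exitSuccess, exitWith)
import System.IO (hPutStrLn, stderr)

-- hypothetical placeholder for the tool's real work
runChecks :: IO Bool
runChecks = pure True

main :: IO ()
main = do
  ok <- runChecks
  if ok
    then exitSuccess -- exit code 0: the CI job goes green
    else do
      -- print the message *and* exit non-zero, so callers like
      -- make or a CI system can see the failure
      hPutStrLn stderr "checks failed"
      exitWith (ExitFailure 1)
```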
* the environment
  - dynamic scope: a String -> String mapping. Imagine it as a Reader.
  - a classic one is $TMPDIR - where do we store temporary files? I want to set it for a work session, for example, and have every program in that session use it, no matter how deep in the process call stack.
  - c.f. implicit arguments in Haskell, where the value of an argument passed to a function propagates down to any called function which also has that implicit argument.
    import System.Environment
    getEnvironment :: IO [(String, String)]
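A sketch of the $TMPDIR example using System.Environment's lookupEnv, falling back to /tmp when the variable is unset:

```haskell
import Data.Maybe (fromMaybe)
import System.Environment (lookupEnv)

main :: IO ()
main = do
  -- lookupEnv :: String -> IO (Maybe String); Nothing when unset
  tmp <- fromMaybe "/tmp" <$> lookupEnv "TMPDIR"
  putStrLn ("storing temporary files under: " ++ tmp)
```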
* running other processes
  - many ways / libraries. Especially interesting is capturing stdout/stderr in various ways vs passing them through.
    import System.Process
  - a variety of different things, from the simple:
      callCommand :: String -> IO ()
    to:
      createProcess :: CreateProcess -> IO (Maybe Handle, Maybe Handle, Maybe Handle, ProcessHandle)
  - that more elaborate CreateProcess structure lets you specify a lot of things, such as a different environment (which would otherwise be inherited), and what to do with the std streams.
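Between those two extremes, System.Process also offers readProcessWithExitCode, which captures stdout, stderr and the exit code in one call - a sketch that assumes `git` is installed and we're inside a repository:

```haskell
import System.Exit (ExitCode (..))
import System.Process (readProcessWithExitCode)

main :: IO ()
main = do
  -- run "git rev-parse --short HEAD" with empty stdin,
  -- capturing both output streams rather than passing them through
  (code, out, err) <- readProcessWithExitCode "git" ["rev-parse", "--short", "HEAD"] ""
  case code of
    ExitSuccess   -> putStrLn ("HEAD is at " ++ out)
    ExitFailure n -> putStrLn ("git failed (exit " ++ show n ++ "): " ++ err)
```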
* console colours
  - this is the future, after all... using the same control codes as used in 1980s/1990s-era BBSes. And so your programs can look a bit like a 1990s BBS too!
  - explain control codes basically: the library outputs specific magic byte sequences to the console - they are "in-band" in that way (rather than some out-of-band signalling like more modern graphics), and the library is just emitting those sequences for you.
  - more elaborate, but I'm not going to go into this, is how you'd do a "full screen" - or rather full-window - application.
  - screenshot: diff from `git`
      import System.Console.ANSI
      setSGR [Reset, SetColor Foreground Dull Yellow]
      putStrLn "hello"
      setSGR [Reset]
    ^ screenshot of this
      hSupportsANSI stdout :: IO Bool
    ^ we want to be able to ask this because if we're feeding into a pipe, it is conventional not to send colour codes: colours are for the terminal, not for the next program in the pipe to consume.
  - can also use ANSI codes to deal with cursor positioning and to request things like console size - maybe you want to truncate lines rather than have them wrap, or configure your pretty-printer based on that width.
  - these are the same code sequences used in Travis CI. And BBSes. And MS-DOS ANSI.SYS and vt100 terminals - maybe for fun include a BBS screenshot, or a pic of a vt100.
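Putting the two together - a sketch using the ansi-terminal package that only emits colour codes when stdout actually supports them:

```haskell
import System.Console.ANSI (Color (..), ColorIntensity (..), ConsoleLayer (..),
                            SGR (..), hSupportsANSI, setSGR)
import System.IO (stdout)

main :: IO ()
main = do
  -- conventionally False when stdout is a pipe rather than a terminal
  colourful <- hSupportsANSI stdout
  if colourful
    then do
      setSGR [SetColor Foreground Dull Yellow] -- switch to yellow text
      putStrLn "hello"
      setSGR [Reset] -- put the terminal back how we found it
    else putStrLn "hello" -- plain text for the next program in the pipe
```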
- this is the big one
* command-line parsing
  - the shit way: getArgs, which gives a list of strings. OK if you really want to pass in just one or two mandatory arguments... but that's not how many tools' interfaces should work. Give examples of `ls -t ~` or `git commit -a -m hello` with a subcommand structure.
  - I'm a big fan of writing parsers in Haskell. optparse-applicative is a parser for parsing command-line options, so it has a different feel: the raw tokens are individual strings, as come from getArgs, rather than characters, and there are some features which capture common patterns in command-line parsing that a more general parser might not have.
  - so it does some more interesting descriptive stuff rather than just consuming. For example, parsers come with autogenerated:
    - help text
    - tab completion, for basic options - and it's easy to add on more completion for option parameters.
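A minimal optparse-applicative sketch in the shape of `git commit -a -m hello` (the option names and the Opts record are my own invention for illustration); wrapping the parser with `helper` is what gets you the autogenerated --help text:

```haskell
import Options.Applicative

-- hypothetical options mirroring `git commit -a -m hello`
data Opts = Opts
  { optMessage :: String
  , optAll     :: Bool
  }

optsParser :: Parser Opts
optsParser =
  Opts
    <$> strOption (long "message" <> short 'm' <> metavar "MSG"
                   <> help "Commit message")
    <*> switch (long "all" <> short 'a' <> help "Stage all changed files")

main :: IO ()
main = do
  -- helper adds the autogenerated --help option; execParser exits
  -- with a failure code and a usage message on a bad parse
  opts <- execParser (info (optsParser <**> helper) fullDesc)
  putStrLn ("message: " ++ optMessage opts ++ ", all: " ++ show (optAll opts))
```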