<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Tech on Bradley Falzon</title>
    <link>https://bradleyf.id.au/categories/tech/</link>
    <description>Recent content in Tech on Bradley Falzon</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-au</language>
    <lastBuildDate>Mon, 02 May 2016 08:54:54 +0930</lastBuildDate>
    <atom:link href="https://bradleyf.id.au/categories/tech/index.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>A fork of /x/net/http2 providing Server Push for Go</title>
      <link>https://bradleyf.id.au/dev/go-http2-server-push-fork/</link>
      <pubDate>Mon, 02 May 2016 08:54:54 +0930</pubDate>
      
      <guid>https://bradleyf.id.au/dev/go-http2-server-push-fork/</guid>
      <description>

&lt;h2 id=&#34;overview:237cacd2f89dba420fdba263579103cc&#34;&gt;Overview&lt;/h2&gt;

&lt;p&gt;Since version 1.6, Go has transparently supported HTTP/2 for both clients and servers when using TLS 1.2. The support was made available via the &lt;code&gt;golang.org/x/net/http2&lt;/code&gt; library for Go 1.5.&lt;/p&gt;

&lt;p&gt;What&amp;rsquo;s being documented here is a fork of &lt;code&gt;golang.org/x/net/http2&lt;/code&gt; (see below) which adds preliminary HTTP/2 Server Push support (for Go applications), tested in Google Chrome 49, 50, 52 and Firefox 46.&lt;/p&gt;

&lt;p&gt;See also &lt;a href=&#34;https://blog.cloudflare.com/announcing-support-for-http-2-server-push-2/&#34;&gt;CloudFlare&amp;rsquo;s recent announcement&lt;/a&gt; for more information on HTTP/2 Server Push, as well as the standard&amp;rsquo;s information page: &lt;a href=&#34;http://httpwg.org/specs/rfc7540.html#PushResources&#34;&gt;http://httpwg.org/specs/rfc7540.html#PushResources&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A simple demonstration is available here: &lt;a href=&#34;https://bradleyf.id.au:8443&#34;&gt;https://bradleyf.id.au:8443&lt;/a&gt; and the source code is available here: &lt;a href=&#34;https://github.com/bradleyfalzon/h2push-demo&#34;&gt;https://github.com/bradleyfalzon/h2push-demo&lt;/a&gt; (fork is available: &lt;a href=&#34;https://github.com/bradleyfalzon/net&#34;&gt;https://github.com/bradleyfalzon/net&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;https://bradleyf.id.au/img/serverPushExample.png&#34;&gt;&lt;img src=&#34;https://bradleyf.id.au/img/serverPushExample.png&#34; alt=&#34;HTTP/2 Server Push example&#34; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you&amp;rsquo;re on Twitter and interested in IETF Internet-Drafts and Protocols, check out &lt;a href=&#34;https://twitter.com/rfcbot&#34;&gt;https://twitter.com/rfcbot&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&#34;usage:237cacd2f89dba420fdba263579103cc&#34;&gt;Usage&lt;/h2&gt;

&lt;p&gt;To use the fork, fetch it with &lt;code&gt;go get github.com/bradleyfalzon/net/http2&lt;/code&gt; and import it accordingly: &lt;code&gt;import &amp;quot;github.com/bradleyfalzon/net/http2&amp;quot;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;An appropriate HTTP/2 server can be created by using the following example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;s := &amp;amp;http.Server{
    Addr:           &amp;quot;:3003&amp;quot;,
    Handler:        nil,
    ReadTimeout:    30 * time.Second,
    WriteTimeout:   30 * time.Second,
    MaxHeaderBytes: 1 &amp;lt;&amp;lt; 20,
}
http2.ConfigureServer(s, nil)

log.Fatal(s.ListenAndServeTLS(&amp;quot;cert.pem&amp;quot;, &amp;quot;key.pem&amp;quot;))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Within an HTTP handler, to push a resource simply add its relative path in a &lt;code&gt;Link&lt;/code&gt; header, for example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;w.Header().Add(&amp;quot;Link&amp;quot;, &amp;quot;&amp;lt;/static/main.css&amp;gt;; rel=preload;&amp;quot;)
w.Header().Add(&amp;quot;Link&amp;quot;, &amp;quot;&amp;lt;/static/main.js&amp;gt;; rel=preload;&amp;quot;)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Use Google Chrome 52 (currently a Canary release) to best view pushed resources (earlier releases do support
pushed resources, they just do not clearly indicate them). See the CloudFlare announcement for more information on
viewing pushed resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handlers can push multiple resources&lt;/li&gt;
&lt;li&gt;Pushed resources are fetched via the appropriate HTTP handler, so both static and dynamic assets can be pushed&lt;/li&gt;
&lt;li&gt;In this implementation, a handler can detect if a resource was pushed by checking for presence of the &lt;code&gt;h2push&lt;/code&gt; header. Note, this is not a trusted header, see discussion in implementation issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre&gt;&lt;code&gt;if _, ok := r.Header[&amp;quot;H2push&amp;quot;]; ok {
    // Most likely a pushed request (but a client could have forged this header)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;implementation-details:237cacd2f89dba420fdba263579103cc&#34;&gt;Implementation Details&lt;/h2&gt;

&lt;p&gt;You can quickly view the entire diff on &lt;a href=&#34;https://github.com/bradleyfalzon/net/compare/1aafd77e1e7f6849ad16a7bdeb65e3589a10b2bb...bradleyfalzon:master&#34;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;HTTP/2 supports concurrent requests on a single TCP connection by multiplexing each request and response on their own stream.&lt;/p&gt;

&lt;p&gt;HTTP/2 has different request and response types called frames. For example, a &lt;code&gt;HEADERS&lt;/code&gt; frame contains either the request or response headers, a &lt;code&gt;DATA&lt;/code&gt; frame contains the body for either request or response, and there are other frames that are unique to HTTP/2 such as &lt;code&gt;SETTINGS&lt;/code&gt;, &lt;code&gt;WINDOW_UPDATE&lt;/code&gt; and &lt;code&gt;PUSH_PROMISE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The HTTP/2 spec provides the implementation requirements and is available &lt;a href=&#34;http://httpwg.org/specs/rfc7540.html&#34;&gt;http://httpwg.org/specs/rfc7540.html&lt;/a&gt;, see the section on &lt;a href=&#34;http://httpwg.org/specs/rfc7540.html#PushResources&#34;&gt;Server Push&lt;/a&gt; for specifics.&lt;/p&gt;

&lt;p&gt;The first step is to detect which assets need to be pushed. This implementation detects resources to be pushed by the presence of a &lt;code&gt;Link&lt;/code&gt; header when processing the response. This is a simple API, but (in this implementation at least) resources are only detected &lt;em&gt;after&lt;/em&gt; the initial response has been generated by the application. A server or proxy may decide immediately, based on the request alone, whether another resource should be pushed (such as on the value or absence of a cookie, or because the resource is not cacheable), and proxies may also wish to push resources before the initial response is ready. So future server implementations may choose other methods (discussed in the closing notes).&lt;/p&gt;

&lt;p&gt;Note, this implementation&amp;rsquo;s use of the &lt;code&gt;Link&lt;/code&gt; header is a partial implementation of CloudFlare&amp;rsquo;s Server Push behaviour, whereby the application signals a resource to be pushed by setting a header containing the full path of the resource. A future implementation should either correctly follow the full &lt;code&gt;Link&lt;/code&gt; draft (&lt;a href=&#34;https://www.w3.org/Protocols/9707-link-header.html&#34;&gt;https://www.w3.org/Protocols/9707-link-header.html&lt;/a&gt;) or implement another API, with support for sending promises immediately without waiting for the request to finish processing.&lt;/p&gt;
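&lt;p&gt;To illustrate the semi-complete &lt;code&gt;Link&lt;/code&gt; convention described above, here&amp;rsquo;s a minimal sketch of extracting push targets from &lt;code&gt;Link&lt;/code&gt; header values. The &lt;code&gt;pushTargets&lt;/code&gt; helper is hypothetical (not part of the fork) and is deliberately not a full parser of the &lt;code&gt;Link&lt;/code&gt; draft.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// pushTargets extracts target paths from Link header values of the form
// "</static/main.css>; rel=preload". It is a minimal sketch of the
// CloudFlare-style behaviour described above, not a full Link parser.
func pushTargets(links []string) []string {
	var paths []string
	for _, l := range links {
		parts := strings.Split(l, ";")
		target := strings.TrimSpace(parts[0])
		if !strings.HasPrefix(target, "<") || !strings.HasSuffix(target, ">") {
			continue
		}
		// Only consider resources explicitly marked rel=preload.
		for _, p := range parts[1:] {
			if strings.TrimSpace(p) == "rel=preload" {
				paths = append(paths, strings.Trim(target, "<>"))
				break
			}
		}
	}
	return paths
}

func main() {
	links := []string{
		"</static/main.css>; rel=preload;",
		"</static/main.js>; rel=preload;",
		"</feed.xml>; rel=alternate",
	}
	fmt.Println(pushTargets(links)) // [/static/main.css /static/main.js]
}
```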

&lt;p&gt;Once the server knows of a resource to push, it must first send a &lt;code&gt;PUSH_PROMISE&lt;/code&gt; frame to the client, before sending the response&amp;rsquo;s &lt;code&gt;HEADERS&lt;/code&gt; or &lt;code&gt;DATA&lt;/code&gt; frames. See &lt;a href=&#34;http://httpwg.org/specs/rfc7540.html#PushRequests&#34;&gt;http://httpwg.org/specs/rfc7540.html#PushRequests&lt;/a&gt; for why this is the case.&lt;/p&gt;

&lt;p&gt;There are rules on what type of resources can be pushed, and importantly, clients can disable server push; promises may be suitable for a web browser, but not for server to server communication, user-agents such as &lt;code&gt;wget&lt;/code&gt; and &lt;code&gt;curl&lt;/code&gt;, or potentially resource (bandwidth, CPU or battery) constrained devices.&lt;/p&gt;
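&lt;p&gt;The disable-push rule can be sketched as follows: clients advertise &lt;code&gt;SETTINGS_ENABLE_PUSH&lt;/code&gt; (identifier 0x2 in RFC 7540), which defaults to enabled when absent. The &lt;code&gt;pushAllowed&lt;/code&gt; helper below is hypothetical, shown only to illustrate the check a server must make before promising anything.&lt;/p&gt;

```go
package main

import "fmt"

// settingEnablePush is the SETTINGS_ENABLE_PUSH identifier (0x2) from
// RFC 7540 section 6.5.2. A value of 0 disables server push; the
// setting defaults to 1 (enabled) if the client never sends it.
const settingEnablePush uint16 = 0x2

// pushAllowed is a hypothetical helper reporting whether the server may
// push on this connection, given the client's advertised settings.
func pushAllowed(clientSettings map[uint16]uint32) bool {
	v, ok := clientSettings[settingEnablePush]
	if !ok {
		return true // absent: push is enabled by default
	}
	return v == 1 // any value other than 0 or 1 is a protocol error
}

func main() {
	fmt.Println(pushAllowed(map[uint16]uint32{}))                     // true
	fmt.Println(pushAllowed(map[uint16]uint32{settingEnablePush: 0})) // false
}
```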

&lt;p&gt;These checks and detections can be seen: &lt;a href=&#34;https://github.com/bradleyfalzon/net/commit/e5fbdb8434a6c8ca5b358cee38d2acb0070d8fb1#diff-51f54e5e768ac5a5b2539aebaf738475R1972&#34;&gt;https://github.com/bradleyfalzon/net/commit/e5fbdb8434a6c8ca5b358cee38d2acb0070d8fb1#diff-51f54e5e768ac5a5b2539aebaf738475R1972&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once a resource has been chosen to be pushed, the &lt;code&gt;PUSH_PROMISE&lt;/code&gt; frame needs to be created and sent on the existing stream. The frame contains a new stream ID which the resource will be sent on, as well as a mock of the request headers a client would have sent.&lt;/p&gt;

&lt;p&gt;These mock headers give the client an opportunity to reject the stream (by sending a &lt;code&gt;RST_STREAM&lt;/code&gt;), potentially because
the client already has the resource in its cache. By the time the rejection arrives, some of the pushed resource may already have
been sent, so it&amp;rsquo;s important to send the promise as early as possible, and only if it&amp;rsquo;s very likely the client doesn&amp;rsquo;t
already have the resource and the cost of the additional network bandwidth (for both the client and server) doesn&amp;rsquo;t outweigh
the performance gain.&lt;/p&gt;

&lt;p&gt;Go&amp;rsquo;s &lt;code&gt;/x/net/http2&lt;/code&gt; didn&amp;rsquo;t originally contain the ability to create a stream, but by seeing how the existing &lt;a href=&#34;https://github.com/bradleyfalzon/net/blob/e5fbdb8434a6c8ca5b358cee38d2acb0070d8fb1/http2/server.go#L1349&#34;&gt;processHeaders&lt;/a&gt; method creates a new stream from new requests, it&amp;rsquo;s possible to implement a limited (and incomplete) &lt;a href=&#34;https://github.com/bradleyfalzon/net/blob/e5fbdb8434a6c8ca5b358cee38d2acb0070d8fb1/http2/server.go#L1726&#34;&gt;newStream&lt;/a&gt; method which sets up the internal state to support a new stream generated server-side. Future implementations will likely be able to remove some of the repetitive code.&lt;/p&gt;

&lt;p&gt;Note, streams created by clients must be odd numbered (client-initiated streams start at stream ID 1; ID 0 is reserved for connection control frames), and servers create even numbered streams. New stream IDs must always be greater than any previously used ID. So a client creating a new stream with the ID 2^31-1 is probably going to have a bad time very quickly.&lt;/p&gt;
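&lt;p&gt;Those numbering rules can be sketched in a few lines (a hypothetical helper mirroring RFC 7540 section 5.1.1, not the fork&amp;rsquo;s actual code):&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

const maxStreamID = 1<<31 - 1 // stream IDs are 31-bit unsigned integers

// nextPushStreamID returns the next server-initiated stream ID.
// Server-created streams are even numbered and must always increase;
// once the 31-bit ID space is exhausted, no new streams can be created
// on the connection.
func nextPushStreamID(last uint32) (uint32, error) {
	next := last + 2
	if last == 0 {
		next = 2 // first server-initiated stream
	}
	if next > maxStreamID {
		return 0, errors.New("http2: server stream IDs exhausted")
	}
	return next, nil
}

func main() {
	id, _ := nextPushStreamID(0)
	fmt.Println(id) // 2
	id, _ = nextPushStreamID(id)
	fmt.Println(id) // 4
	if _, err := nextPushStreamID(maxStreamID - 1); err != nil {
		fmt.Println(err) // http2: server stream IDs exhausted
	}
}
```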

&lt;p&gt;The newly created method responsible for building the required frames, &lt;a href=&#34;https://github.com/bradleyfalzon/net/commit/e5fbdb8434a6c8ca5b358cee38d2acb0070d8fb1#diff-51f54e5e768ac5a5b2539aebaf738475R2027&#34;&gt;writePromise&lt;/a&gt;, modifies the &lt;code&gt;http2.serverConn&lt;/code&gt; state - and likely violates the package&amp;rsquo;s separation of responsibilities. See the TODO above the function definition for more information. This will likely cause issues and needs further investigation.&lt;/p&gt;

&lt;p&gt;To send the &lt;code&gt;PUSH_PROMISE&lt;/code&gt; frame a new writer struct was created &lt;a href=&#34;https://github.com/bradleyfalzon/net/commit/e5fbdb8434a6c8ca5b358cee38d2acb0070d8fb1#diff-406c8d245998ec8faa9e4fa45fb6d426R85&#34;&gt;in write.go&lt;/a&gt; which simply sends the frame.&lt;/p&gt;

&lt;p&gt;This implementation&amp;rsquo;s generation of request headers only includes the minimal set of headers needed for the request to succeed (method, scheme, authority and path); it does not include Cookie, Accept, etc. This is simply an implementation shortcut and would need to be fixed in a production implementation.&lt;/p&gt;
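&lt;p&gt;That minimal header set looks roughly like this (a hypothetical sketch of the four HTTP/2 pseudo-headers, not the fork&amp;rsquo;s actual code):&lt;/p&gt;

```go
package main

import "fmt"

// promisedRequestHeaders builds the minimal pseudo-header set sent in a
// PUSH_PROMISE by this implementation: method, scheme, authority and
// path. A production implementation would also copy headers such as
// Cookie and Accept from the original request.
func promisedRequestHeaders(authority, path string) [][2]string {
	return [][2]string{
		{":method", "GET"}, // promised requests must be cacheable and safe
		{":scheme", "https"},
		{":authority", authority},
		{":path", path},
	}
}

func main() {
	for _, h := range promisedRequestHeaders("example.com", "/static/main.css") {
		fmt.Printf("%s = %s\n", h[0], h[1])
	}
}
```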

&lt;p&gt;Once the &lt;code&gt;PUSH_PROMISE&lt;/code&gt; frame has been generated, a new set of request headers is generated (more repetition here)
and sent to the HTTP handler responsible for writing to the usual &lt;code&gt;http.ResponseWriter&lt;/code&gt; - which in turn causes the
response &lt;code&gt;HEADERS&lt;/code&gt; and &lt;code&gt;DATA&lt;/code&gt; frames to be sent on the newly created stream ID.&lt;/p&gt;

&lt;p&gt;The implementation currently available was designed to contain the minimum number of changes without refactoring the
entire package - a production implementation is unlikely to make this trade-off, as its developers will likely have
given themselves more than a weekend and already be familiar with the code.&lt;/p&gt;

&lt;p&gt;To recap, the steps required are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detect assets to push&lt;/li&gt;
&lt;li&gt;Ensure client has not disabled server push, and the assets can be sent&lt;/li&gt;
&lt;li&gt;Create a new stream to send the response on&lt;/li&gt;
&lt;li&gt;Create a fake set of request headers to send in &lt;code&gt;PUSH_PROMISE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Send a &lt;code&gt;PUSH_PROMISE&lt;/code&gt; frame on the existing stream, before the &lt;code&gt;DATA&lt;/code&gt; frames of the initial response; this frame contains:

&lt;ul&gt;
&lt;li&gt;new stream ID which will be used to send the headers and data of the resource&lt;/li&gt;
&lt;li&gt;faked request headers allowing the client to reject the resource (some data may already have been sent)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Call the appropriate http handler to write the response to the &lt;code&gt;http.ResponseWriter&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Send the response&amp;rsquo;s &lt;code&gt;HEADERS&lt;/code&gt; and &lt;code&gt;DATA&lt;/code&gt; frames on the promised stream ID&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;implementation-issues:237cacd2f89dba420fdba263579103cc&#34;&gt;Implementation Issues&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This is not a complete nor correct implementation of HTTP/2 Server Push, nor is it suitable as a candidate for production implementation. Here be dragons.&lt;/li&gt;
&lt;li&gt;A better implementation will likely focus on refactoring &lt;code&gt;/x/net/http2&lt;/code&gt; as well as possibly using the serve loop to listen for promises to be created (to safely mutate the &lt;code&gt;http2.serverConn&lt;/code&gt; struct when creating streams).&lt;/li&gt;
&lt;li&gt;This implementation does not handle fragmented headers (&lt;code&gt;CONTINUATION&lt;/code&gt; frames); this is unlikely to be a problem in toy servers, but is a requirement for a production implementation. It should, however, currently send headers up to approximately 16K in size correctly.&lt;/li&gt;
&lt;li&gt;This likely fails existing unit tests, and it neither adds new tests nor updates existing ones.&lt;/li&gt;
&lt;li&gt;It doesn&amp;rsquo;t send promises for &lt;code&gt;HEAD&lt;/code&gt; requests; this is by design, as no testing has been done, although the HTTP/2 specification does not forbid it.&lt;/li&gt;
&lt;li&gt;It currently adds an &amp;ldquo;h2push: true&amp;rdquo; header to pushed requests. This exists only so the demo can differentiate pushed requests from standard requests; a production implementation would need to reconsider this approach, as headers are easily forged by clients. The &lt;code&gt;http.Request&lt;/code&gt; struct could instead be modified to include a Promised (or similar) bool field.&lt;/li&gt;
&lt;li&gt;Only minimal HTTP request headers are sent to the relevant HTTP handlers; headers such as Cookie, User-Agent, Accept*, etc. are not currently sent.&lt;/li&gt;
&lt;li&gt;This does not check if any additional streams exceed the negotiated maximums.&lt;/li&gt;
&lt;li&gt;This fork will not likely be updated from upstream nor maintained to a satisfactory level.&lt;/li&gt;
&lt;li&gt;There may be an issue with requesting the same pushed resource multiple times on the same TCP connection; more testing is required.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;closing-notes:237cacd2f89dba420fdba263579103cc&#34;&gt;Closing Notes&lt;/h2&gt;

&lt;p&gt;I would love to explore the possibility of implementing a cache based on HTTP/2 Server Push, either in a dedicated
reverse proxy (similar to &lt;code&gt;mod_pagespeed&lt;/code&gt;) or as a plugin for an HTTP/2 server such as Caddy.&lt;/p&gt;

&lt;p&gt;More investigation into exactly where Server Push is most beneficial needs to occur. HTTP/2 already provides great benefits via
its multiplexing; perhaps pushing should focus only on resources that reduce first paint events
(style sheets) and on critical scripts required for rendering.&lt;/p&gt;

&lt;p&gt;Applications &lt;em&gt;may&lt;/em&gt; need to ensure they don&amp;rsquo;t push resources in response to requests generated via &lt;code&gt;XMLHttpRequest&lt;/code&gt; or
similar methods.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Link&lt;/code&gt; header method may not be the best API for all applications. As mentioned already, it&amp;rsquo;s important for
some promises to be sent very early, and a response-header based API requires the headers to be processed before the
request has finished processing or been flushed. Response headers may, however, be a good mechanism for intermediate proxies (which explains
CloudFlare&amp;rsquo;s support).&lt;/p&gt;

&lt;p&gt;Detecting when a resource is being pushed may also have its benefits (it&amp;rsquo;s not clear whether or how CloudFlare provides this).&lt;/p&gt;

&lt;h2 id=&#34;thanks:237cacd2f89dba420fdba263579103cc&#34;&gt;Thanks&lt;/h2&gt;

&lt;p&gt;I would like to thank the Go authors for their initial implementation (you know who you are) and I look forward to
a production implementation in later Go versions.&lt;/p&gt;

&lt;p&gt;Thanks to the SPDY team and to those involved in the IETF WG for their work on HTTP/2 and its support for Server Push.&lt;/p&gt;

&lt;p&gt;A big thanks to CloudFlare for giving me the idea for this weekend project, your open source contributions and staff are
always inspirational.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Linux backups using Google&#39;s Nearline Storage</title>
      <link>https://bradleyf.id.au/nix/google-storage-nearline-linux-backups/</link>
      <pubDate>Sun, 22 Mar 2015 10:01:28 +1030</pubDate>
      
      <guid>https://bradleyf.id.au/nix/google-storage-nearline-linux-backups/</guid>
      <description>

&lt;h1 id=&#34;overview:873988bfed98151a192768d7ecdbd93f&#34;&gt;Overview&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Sign up to Google Cloud Platform&lt;/li&gt;
&lt;li&gt;Install &lt;code&gt;gsutil&lt;/code&gt; command&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;gsutil&lt;/code&gt; to create our API credentials&lt;/li&gt;
&lt;li&gt;Create our Nearline Bucket&lt;/li&gt;
&lt;li&gt;Do an initial sync using &lt;code&gt;gsutil&lt;/code&gt;&amp;rsquo;s rsync method&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&#34;sign-up-and-initial-configuration:873988bfed98151a192768d7ecdbd93f&#34;&gt;Sign up and initial configuration&lt;/h1&gt;

&lt;p&gt;Sign up to Google Cloud Platform at &lt;a href=&#34;https://cloud.google.com/&#34;&gt;https://cloud.google.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should be redirected to console.developers.google.com, your first step is to create a project. I&amp;rsquo;ve called mine
&lt;code&gt;Linux Backup&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Don&amp;rsquo;t create any storage buckets or additional credentials yet, we&amp;rsquo;ll create them via &lt;code&gt;gsutil&lt;/code&gt;.&lt;/p&gt;

&lt;h1 id=&#34;install-gsutil:873988bfed98151a192768d7ecdbd93f&#34;&gt;Install gsutil&lt;/h1&gt;

&lt;p&gt;See Also: &lt;a href=&#34;https://cloud.google.com/storage/docs/gsutil_install&#34;&gt;https://cloud.google.com/storage/docs/gsutil_install&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My instructions install gsutil to &lt;code&gt;/usr/local/gsutil&lt;/code&gt;; change this to any path you prefer.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# wget https://storage.googleapis.com/pub/gsutil.tar.gz
# tar xzf gsutil.tar.gz -C /usr/local/
# echo &#39;PATH=$PATH:/usr/local/gsutil&#39; &amp;gt; /etc/profile.d/gsutil.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;configure-gsutil:873988bfed98151a192768d7ecdbd93f&#34;&gt;Configure gsutil&lt;/h1&gt;

&lt;p&gt;See Also: &lt;a href=&#34;https://cloud.google.com/storage/docs/gsutil_install#authenticate&#34;&gt;https://cloud.google.com/storage/docs/gsutil_install#authenticate&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;gsutil config&lt;/code&gt; will provide a URL; open this URL in a browser and log in using the Google account you&amp;rsquo;d like
to use for storage and billing. You&amp;rsquo;ll be asked to authorise the request, and once completed the website will give you a
token to copy and paste back into &lt;code&gt;gsutil&lt;/code&gt;. It&amp;rsquo;s straightforward.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# gsutil config
This command will create a boto config file at /root/.boto containing
your credentials, based on your responses to the following questions.
Please navigate your browser to the following URL:
https://accounts.google.com/o/oauth2/&amp;lt;snip&amp;gt;
In your browser you should see a page that requests you to authorize access to Google Cloud Platform APIs and Services
on your behalf. After you approve, an authorization code will be displayed.

Enter the authorization code: &amp;lt;some key&amp;gt;

Please navigate your browser to https://cloud.google.com/console#/project,
then find the project you will use, and copy the Project ID string from the
second column. Older projects do not have Project ID strings. For such projects,
click the project and then copy the Project Number listed under that project.

What is your project-id? symmetric-index-&amp;lt;some id&amp;gt;

Boto config file &amp;quot;/root/.boto&amp;quot; created. If you need to use a proxy to
access the Internet please see the instructions in that file.
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;create-backups:873988bfed98151a192768d7ecdbd93f&#34;&gt;Create Backups&lt;/h1&gt;

&lt;p&gt;Create a bucket, see &lt;code&gt;gsutil help mb&lt;/code&gt; to see a complete list of options, such as specifying bucket region.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# gsutil mb -c nearline gs://&amp;lt;bucket_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, perform your initial rsync&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# gsutil -m rsync -r /&amp;lt;directory&amp;gt; gs://&amp;lt;bucket_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;-m&lt;/code&gt; option runs the rsync in parallel.&lt;/p&gt;

&lt;p&gt;For future backups, use the &lt;code&gt;-q&lt;/code&gt; option to hide all output except errors. This is useful for cron, as it will then only
email you if an error occurs.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# gsutil -qm rsync -r /&amp;lt;directory&amp;gt; gs://&amp;lt;bucket_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;faster-crc32-checksums:873988bfed98151a192768d7ecdbd93f&#34;&gt;Faster CRC32 Checksums&lt;/h1&gt;

&lt;p&gt;Note, by default it&#39;s likely rsync will use a slow method to calculate CRC32 checksums. For a faster method it&#39;s
recommended to first check whether the compiled module is in use:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ gsutil ver -l | grep crcmod
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If the output shows &lt;code&gt;compiled crcmod: False&lt;/code&gt;, then install the compiled module by following the instructions in &lt;code&gt;gsutil
help crc32c&lt;/code&gt; - which essentially uses &lt;code&gt;pip&lt;/code&gt; to install &lt;code&gt;crcmod&lt;/code&gt;.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Git, Push to Deploy &amp; Hugo</title>
      <link>https://bradleyf.id.au/nix/git-push-deploy-hugo/</link>
      <pubDate>Sun, 11 Jan 2015 14:26:45 +1030</pubDate>
      
      <guid>https://bradleyf.id.au/nix/git-push-deploy-hugo/</guid>
      <description>

&lt;h1 id=&#34;overview:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Overview&lt;/h1&gt;

&lt;p&gt;Push to deploy is a mechanism to automate building and deploying code within a version control system (VCS) to a
staging or production server. In this case, we&amp;rsquo;ll
be using Git for our VCS, Git&amp;rsquo;s &lt;code&gt;post-receive&lt;/code&gt; hooks for automating the builds and &lt;a href=&#34;http://gohugo.io/&#34;&gt;Hugo&lt;/a&gt; to build
the blog itself.&lt;/p&gt;

&lt;p&gt;This assumes you want two Git remotes: one for your VCS purposes and the other a production server for deployments.
You can expand the logic here to include a staging server or a more elaborate deployment to multiple servers. Of
course, there are other tools more suited to proper deployment processes.&lt;/p&gt;

&lt;p&gt;Alternatively, this design can be adjusted to support pushing to a continuous integration (CI) server that can be
tasked to build environments, run tests and deploy code to multiple servers.&lt;/p&gt;

&lt;p&gt;Although this process contains references to Hugo, it can be used to deploy any code. However, as it is
only designed to deploy a simple blog on the &lt;code&gt;master&lt;/code&gt; branch, your mileage may vary.&lt;/p&gt;

&lt;h1 id=&#34;steps:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Steps&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Start with an existing repository with a remote called origin pointing to your favourite VCS (eg personal server,
GitLab, GitHub / BitBucket etc).&lt;/li&gt;
&lt;li&gt;On your production server, create a repo that will be used for hosting your live content.&lt;/li&gt;
&lt;li&gt;Add a post-receive hook to the live remote which checks out master, runs Hugo and cuts over the blog.&lt;/li&gt;
&lt;li&gt;Add the new live remote to existing repo.&lt;/li&gt;
&lt;li&gt;Push to live remote.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&#34;starting-point:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Starting Point&lt;/h2&gt;

&lt;p&gt;For this example, we&amp;rsquo;re starting with a straight forward git repo with only one remote.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ git remote -v
origin  git@bitbucket.org:bradleyfalzon/bradleyf-blog.git (fetch)
origin  git@bitbucket.org:bradleyfalzon/bradleyf-blog.git (push)
$ git branch
* master
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can see I store my blog content on Atlassian&amp;rsquo;s &lt;a href=&#34;https://bitbucket.org/&#34;&gt;BitBucket&lt;/a&gt; service. I&amp;rsquo;ve checked it out
locally and I&amp;rsquo;m on the master branch. From now on, I&amp;rsquo;ll assume you&amp;rsquo;re not looking at tracking remote branches, but this
process would only require minor modifications in that case.&lt;/p&gt;

&lt;h2 id=&#34;configure-production-remote:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Configure Production Remote&lt;/h2&gt;

&lt;p&gt;First we&amp;rsquo;ll create a new bare repo on the server. An alternative technique is available in the &lt;a href=&#34;http://git-scm.com/book/en/v2/Git-on-the-Server-Getting-Git-on-a-Server&#34;&gt;git-scm
book&lt;/a&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ cd /data/git
$ mkdir bradleyf-blog.git
$ cd bradleyf-blog.git
$ git init --bare
Initialized empty Git repository in /data/git/bradleyf-blog.git/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This has created an empty git repo with no working tree (thanks to &lt;code&gt;--bare&lt;/code&gt;). A simple &lt;code&gt;git clone&lt;/code&gt; would&amp;rsquo;ve created a
working tree and checked out the &lt;code&gt;master&lt;/code&gt; branch, which would stop a client from pushing to it whilst that branch is checked out.&lt;/p&gt;

&lt;h2 id=&#34;configure-post-receive-hooks:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Configure Post Receive Hooks&lt;/h2&gt;

&lt;p&gt;With our new (empty) git repo on the server, we need to configure the &lt;code&gt;post-receive&lt;/code&gt; hook that will be executed
after a push succeeds. Whilst this hook is running it will block the client from disconnecting, so don&amp;rsquo;t run anything too slow, or background
any slow-running scripts.&lt;/p&gt;

&lt;p&gt;The post-receive hook is stored in &lt;code&gt;hooks/post-receive&lt;/code&gt;, and must be executable. Use a hashbang (&lt;code&gt;#!&lt;/code&gt;) to specify
which interpreter should execute your script; in this case it&amp;rsquo;s Bash.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We&amp;rsquo;re using &lt;code&gt;post-receive&lt;/code&gt; hooks, alternatively you could use a &lt;code&gt;pre-receive&lt;/code&gt; hook which will give you the ability
to exit with a non-zero status to indicate a failure and reject the push. This could be used to execute simple
tests before accepting a push from a client.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Our &lt;code&gt;post-receive&lt;/code&gt; hook looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;#!/bin/bash

# SYMDIR will be a symlink pointing to the current version
# of the content. In my case nginx has this directories public
# dir as the document_root.
SYMDIR=/data/www/bradleyf.id.au

# When there&#39;s failures, send emails to this address
EMAIL=user@example.com

# Store all logs here, overwritten on each deploy
LOG=/tmp/blog-deploy.log

# Tell git that this other directory is the working tree, so
# the content is checked out to this directory
GIT_WORK_TREE=$SYMDIR-`date +&amp;quot;%s&amp;quot;`

export GIT_WORK_TREE

# Simple checkErrors function, the first argument is a string to write to log
# if something happens.
function checkErrors() {
        if [ &amp;quot;$?&amp;quot; != &amp;quot;0&amp;quot; ]; then
                echo $1 &amp;gt;&amp;gt; $LOG
                cat $LOG | mail -s &amp;quot;Git deploy problems&amp;quot; $EMAIL
                exit 1
        fi
}

date &amp;gt; $LOG

# Create our working tree
rm -rf $GIT_WORK_TREE &amp;amp;&amp;gt; /dev/null
mkdir $GIT_WORK_TREE &amp;amp;&amp;gt;&amp;gt; $LOG
checkErrors &amp;quot;Could not mkdir $GIT_WORK_TREE&amp;quot;

# Checkout master to the working tree directory
git checkout -f master &amp;amp;&amp;gt;&amp;gt; $LOG
checkErrors &amp;quot;Could not checkout master to $GIT_WORK_TREE&amp;quot;

cd $GIT_WORK_TREE &amp;amp;&amp;gt;&amp;gt; $LOG
checkErrors &amp;quot;Could not change directory to $GIT_WORK_TREE&amp;quot;

# Run hugo to build the content and store in public/
hugo &amp;amp;&amp;gt;&amp;gt; $LOG
checkErrors &amp;quot;Could not use hugo to build blog&amp;quot;

# Atomically cut over the old working tree to the new, note
# using ln may not be the best method and mv should be considered.
ln -sfn $GIT_WORK_TREE $SYMDIR &amp;amp;&amp;gt;&amp;gt; $LOG
checkErrors &amp;quot;Could not create sym link from $GIT_WORK_TREE to $SYMDIR&amp;quot;

# Remove old versions (to revert use git-revert)
find $SYMDIR-* -maxdepth 0 -type d | grep -v $GIT_WORK_TREE | xargs rm -rf
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note, this is the first revision of this script; your environment may require different options or flows depending
on your use case.&lt;/p&gt;

&lt;h2 id=&#34;configure-clients:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Configure Clients&lt;/h2&gt;

&lt;p&gt;Once the server&amp;rsquo;s been configured, all clients will need to add this server as a remote. In my case, I&amp;rsquo;ve left
BitBucket as my origin and added the production server, &lt;code&gt;bradleyf.id.au&lt;/code&gt;, as a remote called &lt;code&gt;live&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ git remote add live root@bradleyf.id.au:/data/git/bradleyf-blog.git
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&#34;push-to-production:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Push to Production&lt;/h2&gt;

&lt;p&gt;Now a quick &lt;code&gt;git push live&lt;/code&gt; would push my content to the production server and git&amp;rsquo;s &lt;code&gt;post-receive&lt;/code&gt; hook will build the content and deploy for me (or email me if there&amp;rsquo;s a problem).&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ git push live
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I wanted to manually push to &lt;code&gt;live&lt;/code&gt; so I could control when I&amp;rsquo;m pushing as to whether it&amp;rsquo;s going into my VCS/origin (default) or live production server.&lt;/p&gt;

&lt;p&gt;You must remember to push to both remotes when you&amp;rsquo;re ready: one for VCS, one for live. To turn this two-step process
into one, I manually created a third remote called &lt;code&gt;all&lt;/code&gt;, so I could run &lt;code&gt;git push all&lt;/code&gt;, which pushes to both the &lt;code&gt;origin&lt;/code&gt;
and &lt;code&gt;live&lt;/code&gt; remotes in one command.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ tail -n 3 .git/config
[remote &amp;quot;all&amp;quot;]
    url = git@bitbucket.org:bradleyfalzon/bradleyf-blog.git
    url = root@bradleyf.id.au:/data/git/bradleyf-blog.git
&lt;/code&gt;&lt;/pre&gt;
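&lt;p&gt;If you&amp;rsquo;d rather not edit &lt;code&gt;.git/config&lt;/code&gt; by hand, the same &lt;code&gt;all&lt;/code&gt; remote can be created with two git commands; this is just a sketch that produces the configuration above:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ git remote add all git@bitbucket.org:bradleyfalzon/bradleyf-blog.git
$ git remote set-url --add all root@bradleyf.id.au:/data/git/bradleyf-blog.git
&lt;/code&gt;&lt;/pre&gt;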

&lt;p&gt;In total, I have three options when I push:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ git push
    - Push to origin remote only
$ git push live
    - Push to live production server only
$ git push all
    - Push to both BitBucket and production server
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;additional-tips:52bec9cedebb497ed4f2fb4787cb8267&#34;&gt;Additional Tips&lt;/h1&gt;

&lt;p&gt;Show the &lt;em&gt;committed&lt;/em&gt; differences between current &lt;code&gt;master&lt;/code&gt; and the remote &lt;code&gt;live&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;git diff master..live/master
&lt;/code&gt;&lt;/pre&gt;
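&lt;p&gt;Note that &lt;code&gt;live/master&lt;/code&gt; is a remote-tracking branch and is only as current as your last fetch, so fetch first if in doubt:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ git fetch live
$ git diff master..live/master
&lt;/code&gt;&lt;/pre&gt;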
</description>
    </item>
    
    <item>
      <title>Shaving your RTT with TCP Fast Open</title>
      <link>https://bradleyf.id.au/nix/shaving-your-rtt-wth-tfo/</link>
      <pubDate>Sat, 03 Jan 2015 12:13:48 +1030</pubDate>
      
      <guid>https://bradleyf.id.au/nix/shaving-your-rtt-wth-tfo/</guid>
      <description>

&lt;h1 id=&#34;tl-dr:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;TL;DR&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;TCP Fast Open (TFO) allows clients to send data in the initial SYN request, without waiting for a full handshake to
occur. This removes an entire round trip &lt;em&gt;almost&lt;/em&gt; transparently from the application.&lt;/li&gt;
&lt;li&gt;Cookies are included in the TCP options header.&lt;/li&gt;
&lt;li&gt;It&amp;rsquo;s available in Linux 3.7+; nginx and HAProxy already have support. Client support is lacking, with only
Chrome/Chromium on Linux, Chrome OS and Android 5.0 (Lollipop), and only if enabled manually.&lt;/li&gt;
&lt;li&gt;There are some potential issues with delivering duplicate data to the server&amp;rsquo;s application, which wouldn&amp;rsquo;t occur in normal
TCP; although it is unlikely, some applications may not be compatible.

&lt;ul&gt;
&lt;li&gt;It&amp;rsquo;s perfect for static content (CDNs),&lt;/li&gt;
&lt;li&gt;Every website which uses TLS (it&amp;rsquo;ll reduce that handshake time).&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&#34;overview:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Overview&lt;/h1&gt;

&lt;p&gt;Standard TCP requires the client and server to establish a three way handshake (3WHS) before data can be delivered to the
server&amp;rsquo;s listening application. This introduces a latency of one round trip before the server&amp;rsquo;s application receives the data, and a total
of two round trips by the time the server can respond. This can be significant for latency sensitive protocols such as HTTP.&lt;/p&gt;

&lt;p&gt;TCP Fast Open (TFO), defined by &lt;a href=&#34;https://datatracker.ietf.org/doc/rfc7413/&#34;&gt;RFC 7413&lt;/a&gt;, still requires an initial
3WHS to be established before data can be sent for the first time. During this handshake&amp;rsquo;s first SYN packet, the client can request
a TFO cookie, and a compatible server responds with a cookie in its SYN-ACK response.&lt;/p&gt;

&lt;p&gt;Once the client receives the SYN-ACK response it caches the cookie and responds with the application
data (such as an &lt;code&gt;HTTP GET&lt;/code&gt;, or if it&amp;rsquo;s TLS, a &lt;code&gt;ClientHello&lt;/code&gt;) as per normal. In total, at least two round trips were required: one for the connection setup (initial SYN and
SYN-ACK) and another for the application request and response.&lt;/p&gt;

&lt;p&gt;Now that the client has a TFO cookie, a new connection&amp;rsquo;s SYN packet sent to the same server also includes the application data. The
server validates the cookie and responds to the client with a SYN-ACK whilst also immediately delivering the data to the
server&amp;rsquo;s listening application for a response, reducing the total round trips to one.&lt;/p&gt;

&lt;h1 id=&#34;support:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Support&lt;/h1&gt;

&lt;h2 id=&#34;servers:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Servers&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Linux 3.7+ e.g. Red Hat Enterprise Linux (RHEL/CentOS) 7, Ubuntu 14.04 LTS&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://nginx.org/en/docs/http/ngx_http_core_module.html#listen&#34;&gt;nginx 1.5.8&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;nginx-provided RPMs do not have support; other packages may&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.1-tfo&#34;&gt;HAProxy 1.5&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;clients:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Clients&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Chrome/Chromium on Linux, Chrome OS or Android (it&amp;rsquo;s unclear whether Chrome flags have it disabled, and if not, whether users need Linux 3.13 to have
it enabled by default)&lt;/li&gt;
&lt;li&gt;Linux 3.6+ e.g. Red Hat Enterprise Linux (RHEL/CentOS) 7, Fedora 18, Amazon Linux AMI 2014.03, Ubuntu 13.04&lt;/li&gt;
&lt;li&gt;Chrome OS&lt;/li&gt;
&lt;li&gt;Android Lollipop (5.0)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;websites:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Websites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Google&amp;rsquo;s assets such as Google Search, YouTube, Blogger, DoubleClick etc&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&#34;detailed-process:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Detailed Process&lt;/h1&gt;

&lt;h2 id=&#34;sending-tfo-request:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Sending TFO Request&lt;/h2&gt;

&lt;p&gt;During the initial TCP connection, a client requests the kernel create a TCP connection with the TFO options
enabled. In Linux, a client normally uses the &lt;code&gt;connect()&lt;/code&gt; and &lt;code&gt;write()&lt;/code&gt; system calls to create and send on a TCP
connection. To use TFO, these calls need to be combined, allowing application data to also be sent during the
initial connection phase. Therefore, not only does the TCP stack the application runs on need to support TFO, but clients must also be written to support
it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;API changes required to the application are more fully described in LWN&amp;rsquo;s article &lt;a href=&#34;http://lwn.net/Articles/508865/&#34;&gt;TCP Fast Open: expediting web
services&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The SYN packet generated by the client&amp;rsquo;s TCP stack will set &lt;a href=&#34;http://www.iana.org/assignments/tcp-parameters/tcp-parameters.xhtml#tcp-parameters-1&#34;&gt;TCP Option
34&lt;/a&gt;, which corresponds to the TCP Fast
Open Cookie. Because no TFO connections have been established, the cookie is empty, indicating to the server that the client
supports TFO and would like a cookie. There is also no data within this SYN packet.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A SYN packet&amp;rsquo;s maximum TCP options length is 40 bytes; a Linux 3.18 kernel currently uses 20 (MSS 4 bytes, TCP SACK 2
bytes, Timestamps 10 bytes, NOP 1 byte and Window Scale 3 bytes). Adding a TFO cookie increases this to 32. The &lt;a href=&#34;http://tools.ietf.org/html/rfc7413#section-4.2.2&#34;&gt;TFO RFC
states&lt;/a&gt; that if the SYN packet does not have enough space to fit the TFO
options, TFO is disabled for this connection. Note, it doesn&amp;rsquo;t state this for SYN-ACK replies, but one would assume it would
also be true.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&#34;receiving-a-tfo-request:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Receiving a TFO Request&lt;/h2&gt;

&lt;p&gt;During transit, some middleware may not understand this TCP option and instead drop the packet, in which case the client should
retransmit the SYN packet without a TFO option, i.e. attempt a standard TCP connection instead. The client should also &lt;a href=&#34;http://tools.ietf.org/html/rfc7413#section-4.1.3.1&#34;&gt;cache the
negative response&lt;/a&gt; so future connections to this destination will not need to wait for
a timeout, and will instead try a normal TCP connection the first time.&lt;/p&gt;

&lt;p&gt;The server application, much like the client&amp;rsquo;s, must also be written to support TFO. In the server&amp;rsquo;s case, it must set
the &lt;code&gt;TCP_FASTOPEN&lt;/code&gt; socket option via the &lt;code&gt;setsockopt()&lt;/code&gt; system call and provide a maximum queue length (discussed later
in DoS &amp;amp; Amplification Attacks).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An example client and server, using system calls in Go can be found &lt;a href=&#34;https://github.com/bradleyfalzon/tcp-fast-open&#34;&gt;github.com/bradleyfalzon/tcp-fast-open&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When the server&amp;rsquo;s TCP stack receives the SYN packet, if it does not support TFO it will ignore the TFO option
and establish a normal TCP connection. If the server supports TFO, and the listening socket has requested TFO support,
it will generate a message authentication code (MAC) of the client&amp;rsquo;s IP address (Linux also includes the server&amp;rsquo;s IP
address) using the server&amp;rsquo;s secret key.&lt;/p&gt;

&lt;p&gt;If the received SYN packet does not contain a cookie, but contains TFO options, the generated cookie is sent to the client in the
server&amp;rsquo;s SYN-ACK packet, using the same TFO option space and a normal TCP connection is established without any round
trip savings.&lt;/p&gt;

&lt;h2 id=&#34;receiving-a-tfo-request-response:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Receiving a TFO Request Response&lt;/h2&gt;

&lt;p&gt;When the client receives the SYN-ACK packet with the TFO cookie set, it caches this cookie, the server&amp;rsquo;s IP address,
and the server&amp;rsquo;s advertised maximum segment size (MSS). This cookie can now be used by the client when it tries to
connect to the same IP address with client TFO support enabled (note, the cookie is bound to the server&amp;rsquo;s IP and doesn&amp;rsquo;t
include the server&amp;rsquo;s destination port number).&lt;/p&gt;

&lt;h2 id=&#34;sending-tfo-data:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Sending TFO+Data&lt;/h2&gt;

&lt;p&gt;When a client tries to connect to the same server again with TFO socket options set, the initial TCP SYN packet will
be generated with the previously cached TFO cookie and application data up to the MSS size. If the application data exceeds the MSS size, additional data must wait to be sent until after the 3WHS is
established - negating the use of TFO.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The default MSS for IPv4 is only 536 bytes (IPv6 is 1220 bytes), whereas the typical MSS is closer to
1460 bytes, which is why it&amp;rsquo;s important to cache the MSS: to try to fit all the data in the initial request. So, keep
request sizes within ~1460 bytes, for example by reducing HTTP Cookie sizes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&#34;receiving-tfo-data:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Receiving TFO+Data&lt;/h2&gt;

&lt;p&gt;On the server&amp;rsquo;s side, the received SYN packet is checked for a TFO cookie, and if one is present, the server generates a cookie. The
generated cookie is compared to the cookie in the SYN packet. If the comparison is successful, the server replies with a
SYN-ACK acknowledging the SYN and data, and sends the data to the listening application. If the comparison fails, the server replies
with a SYN-ACK acknowledging only the SYN (not the data) and includes the generated TFO cookie for the client to use next time.&lt;/p&gt;

&lt;h2 id=&#34;receiving-a-tfo-data-response:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Receiving a TFO+Data Response&lt;/h2&gt;

&lt;p&gt;When the client receives the SYN-ACK packet, it checks whether the data was acknowledged; if it was, it ACKs the SYN-ACK and
waits for further responses (if applicable) from the server.&lt;/p&gt;

&lt;p&gt;If the data was not acknowledged in the received SYN-ACK packet, the client still ACKs the server&amp;rsquo;s SYN-ACK, but also sends another packet
with the application data (as it would a normal TCP connection). If the received SYN-ACK packet contains a TFO cookie
the client will cache it for use next time.&lt;/p&gt;

&lt;p&gt;This system provides the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connections benefit from TFO only after an initial TCP connection is established&lt;/li&gt;
&lt;li&gt;Client application and kernel support is required, as well as server application and server kernel support&lt;/li&gt;
&lt;li&gt;Clients can handle most bad middleware by retransmitting without TFO&lt;/li&gt;
&lt;li&gt;Clients that support TFO connecting to servers that do not are handled gracefully, by simply ignoring the TFO cookie request&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&#34;potential-issues:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Potential Issues&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Applications that receive the initial SYN data must be tolerant of duplication, in many cases the application is TLS
which is tolerant of duplicate data. Other applications that are not technically tolerant may simply accept the risk
if the impact is low (e.g. the risk of a forum website double posting may be acceptable, but purchasing all items in the
cart twice might not be).&lt;/li&gt;
&lt;li&gt;If a client needs to send more than the cached MSS, TFO is unavailable. For example, my PayPal cookies are currently
3298 bytes, well above the 1460 MSS currently advertised.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&#34;duplicate-data-idempotency:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Duplicate Data &amp;amp; Idempotency&lt;/h2&gt;

&lt;p&gt;Applications with TFO enabled sockets must be able to handle receiving duplicate data due to retransmitted SYN packets.
This is only an issue for the data within the initial SYN packet. Subsequent packets sent after the initial TCP
handshake (and non-TFO TCP connections) detect duplicate data and drop the extra packets without delivering them to the
application.&lt;/p&gt;

&lt;p&gt;Duplicate data is possible by at least two methods &lt;a href=&#34;http://tools.ietf.org/html/rfc7413#section-2.1&#34;&gt;outlined by the
RFC&lt;/a&gt;, and potentially a third, discussed later.&lt;/p&gt;

&lt;p&gt;The first condition, where a server &amp;ldquo;forgets&amp;rdquo; it received the initial SYN packet (e.g. due to a server reboot), &lt;a href=&#34;https://lwn.net/Articles/509376/&#34;&gt;has been
discussed&lt;/a&gt; and two workarounds proposed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Delay enabling TFO by a few minutes&lt;/li&gt;
&lt;li&gt;Regenerate a new server key upon reboot, either randomly or based on a boot ID.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Solution 1 is a reasonable solution, with the obvious drawback of not having TFO enabled immediately and potentially (this
needs verification) causing a client to clear its TFO cookie cache. Delayed start could be managed by userland tools, by
simply disabling TFO by default and creating a service to enable TFO five minutes after server start (reboot).&lt;/p&gt;
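&lt;p&gt;As a rough sketch of that delayed-start approach, assuming a cron with &lt;code&gt;@reboot&lt;/code&gt; support is available (the file name is arbitrary):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# /etc/cron.d/tcp-fastopen: enable TFO roughly five minutes after boot
@reboot root sleep 300 &amp;amp;&amp;amp; sysctl -w net.ipv4.tcp_fastopen=3
&lt;/code&gt;&lt;/pre&gt;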

&lt;p&gt;Solution 2 is already implemented in Linux, as the server key is randomly generated on each reboot (versions
before Linux 3.13 generated a new TFO key on boot; newer versions only do so once a socket sets the relevant
TFO socket option). However, the use of TFO in some load
balanced topologies, such as Direct Server Return (DSR), requires the servers to share the same TFO key, thereby allowing a
client to reuse the same cached cookie on any server in the farm. By randomly generating the server key each reboot,
all servers will need to change their key to the new key, which could reduce the effectiveness by needlessly increasing the amount of
key rotation - significant on larger farms.&lt;/p&gt;

&lt;p&gt;The second and following conditions have no obvious solutions provided by TFO itself, which means either TFO is
incompatible with the application or, as with the third condition, a change must be made to the higher level
architecture to support TFO.&lt;/p&gt;

&lt;p&gt;A third condition which can cause duplicate data, that wasn&amp;rsquo;t obviously mentioned
in the RFC, is a server farm where a retransmitted SYN packet arrives at a different server before the client receives
acknowledgement (SYN-ACK) of the first packet. This scenario requires the SYN-ACK packet to be dropped or delayed and
requires the load balanced topology to deliver the second SYN packet to a different server (either because no
persistence is configured or because the server had been removed from the load balanced farm). Just like a non-TFO TCP
connection, the server acknowledges the first SYN packet before it sends the application&amp;rsquo;s response; therefore, there&amp;rsquo;s
no risk of slow application processing causing a retransmit.&lt;/p&gt;

&lt;p&gt;A further note: one of the original developers of TFO states &amp;ldquo;today the client may send a non-idempotent request twice
already with standard TCP&amp;rdquo;. Although it might be possible already, it&amp;rsquo;s not clear how much TFO increases this risk.&lt;/p&gt;

&lt;h2 id=&#34;dos-amplification-attacks:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;DoS &amp;amp; Amplification Attacks&lt;/h2&gt;

&lt;p&gt;There is a risk of amplification attacks, as &lt;a href=&#34;http://tools.ietf.org/html/rfc7413#section-5.2&#34;&gt;described in more detail in the
RFC&lt;/a&gt;. If an attacker can steal a valid cookie for the victim&amp;rsquo;s IP
address, the attacker can generate a small request which may solicit a very large response sent directly to the victim.&lt;/p&gt;

&lt;p&gt;The RFC further describes protections by requiring the server to automatically disable TFO (for that socket) in the event
the TFO queue length exceeds an application defined max queue length. The max queue length defines the maximum number of
outstanding unacknowledged SYN-ACK packets. Server applications are required to set this on listening
sockets via the &lt;code&gt;setsockopt&lt;/code&gt; system call. Exceeding the max queue length will cause new connections to ignore TFO cookies and
revert to a standard TCP handshake. The number of ignored TFO cookies can be monitored via the
&lt;code&gt;TCPFastOpenListenOverflow&lt;/code&gt; counter; it is not logged. See &lt;code&gt;tcp_fastopen_queue_check&lt;/code&gt; in &lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/ipv4/tcp_fastopen.c#n210&#34;&gt;Linux 3.18&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Because the attacker&amp;rsquo;s requests are received by the application, standard DoS detection mechanisms can easily detect
this, but in the case of TLS, the attacker can generate a small ClientHello request which will solicit a very large
response from the server (containing the server&amp;rsquo;s certificate as well as intermediate certificates). Because the attack
occurs at a lower level, within the TLS library, it&amp;rsquo;s less likely to be logged and may be unnoticed.&lt;/p&gt;

&lt;p&gt;During a DoS attack the victim will receive SYN-ACK packets with
TFO option bits set without seeing an earlier SYN request. A firewall or intrusion detection system (IDS) may be able
to detect this type of attack. The server would also see RST packets from the victim as the unsolicited packets are
rejected by the destination, providing an additional clue of a possible TFO DoS attack.&lt;/p&gt;

&lt;p&gt;An attacker with reliable access to the victim&amp;rsquo;s network, who is able to obtain valid cookies
from a series of servers, may be able to launch a larger or prolonged attack, as each server will need to individually
detect that it&amp;rsquo;s involved in an attack and individually disable TFO. A victim&amp;rsquo;s network may therefore decide to remove
the TFO option from all outgoing SYN packets to prevent TFO on the network completely.&lt;/p&gt;

&lt;p&gt;The max queue length counter is the total number of outstanding SYN-ACK packets, i.e. it is per socket, not per client.
Therefore, when an attack is detected by the server, TFO is disabled for all clients (not just the victim). It may not be
desirable for a single attacker to disable TFO for all customers; although this won&amp;rsquo;t cause a denial of service, it will
disable the benefits of TFO and is trivially exploited.&lt;/p&gt;

&lt;h2 id=&#34;middleware-issues:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Middleware Issues&lt;/h2&gt;

&lt;p&gt;Some middleware, such as firewalls and NAT boxes, may cause issues with the new TCP option. Additionally, because
Linux continues to set the TFO option to 254, the experimental kind, it may be more likely to be dropped.&lt;/p&gt;

&lt;p&gt;It&amp;rsquo;s even &lt;a href=&#34;https://code.google.com/p/chromium/issues/detail?id=271766#c5&#34;&gt;been reported&lt;/a&gt; some middleware boxes, after
detecting the TFO option in the initial SYN packet, drop subsequent SYN packets without the TFO option.&lt;/p&gt;

&lt;p&gt;Also, if a device is behind a Carrier Grade NAT (CGN) with many constantly changing public IP addresses, a cookie may be invalidated often, reducing the
effectiveness of TFO. High latency mobile devices, which benefit the most from TFO, are also the most likely to be affected by
changing public IP addresses due to CGNs. I currently have no data on this.&lt;/p&gt;

&lt;h2 id=&#34;minor-linux-api-caveats:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Minor Linux API Caveats&lt;/h2&gt;

&lt;p&gt;A strong benefit of TFO is the ability to reduce the need for long running, persistent, TCP connections (such as HTTP servers
with keep alives). Persistent connections allow a client to send more data, at a later time, without needing to wait
for a 3WHS - but these connections are not free to maintain, costing both clients and servers, as further described in the &lt;a href=&#34;http://tools.ietf.org/html/rfc7413#section-6.3.3&#34;&gt;TFO
RFC&lt;/a&gt;. Although TFO can immediately reduce the need for persistent
connections (for connections established with TFO), Linux does not currently provide an API for the application to determine
whether the connection was negotiated with TFO. Persistent connections would only be needed for non-TFO
connections - but because the application has no API to detect a TFO connection, it cannot selectively disable persistent
connections. Mobile clients may also wish to disable, or shorten, persistent connections and/or browser TCP preconnects when TFO connections are available.&lt;/p&gt;

&lt;p&gt;Potentially the &lt;code&gt;getsockopt&lt;/code&gt; system call could be capable of providing this information, as it provides other socket
information such as whether the socket is listening, debugging etc.&lt;/p&gt;

&lt;p&gt;TFO has been assigned &lt;a href=&#34;http://www.iana.org/assignments/tcp-parameters/tcp-parameters.xhtml#tcp-parameters-1&#34;&gt;TCP option 34&lt;/a&gt;
by IANA; however, Linux (3.18 at the time of writing) &lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/ipv4/tcp_input.c?id=b7392d2247cfe6771f95d256374f1a8e6a6f48d6#n3652&#34;&gt;currently uses the old experimental TCP option 254&lt;/a&gt;. This leads to an interesting thought: once Linux supports the
new TCP option number, will it continue to support the old experimental option number? Current Linux servers, such as
Red Hat Enterprise Linux 7 and its derivatives, will continue to set (in the case of a client) or check (in the
case of a server) the older option number. Will the kernel developers simply stop checking the experimental number,
legitimately breaking backwards compatibility and requiring a backport?&lt;/p&gt;

&lt;p&gt;The Linux API does not provide a mechanism for graceful key rotation: once a new key is set, cookies generated with the old
key are immediately invalid. This is briefly discussed in the RFC in &lt;a href=&#34;http://tools.ietf.org/html/rfc7413#section-4.1.2&#34;&gt;Server Cookie
Handling&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When a connection exceeds the &lt;code&gt;max_qlen&lt;/code&gt; (the maximum number of unacknowledged TFO requests, used to reduce DoS
attacks), the kernel could log the event via &lt;code&gt;pr_notice&lt;/code&gt;, &lt;code&gt;printk&lt;/code&gt; or similar. Most people will probably never monitor the correct
counters to detect this event, but will likely monitor kernel messages.&lt;/p&gt;

&lt;h1 id=&#34;enabling-tfo-in-the-kernel:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Enabling TFO in the Kernel&lt;/h1&gt;

&lt;p&gt;Linux supports configuring both overall client and server support via &lt;code&gt;/proc/sys/net/ipv4/tcp_fastopen&lt;/code&gt;
(&lt;code&gt;net.ipv4.tcp_fastopen&lt;/code&gt; via sysctl). The options are a bit mask: the first bit enables or disables client support
(default on), the 2nd bit sets server support (default off), and the 3rd bit sets whether data in a SYN packet is permitted without the
TFO cookie option. Therefore, a value of 1 enables TFO only on outgoing connections (client only), a value of 2 allows
TFO only on listening sockets (server only), and a value of 3 enables TFO for both client and server.&lt;/p&gt;

&lt;p&gt;Note, even though these options may be enabled, application level support must also be enabled.&lt;/p&gt;

&lt;p&gt;To enable TFO persistently across reboots, you can use sysctl like the following:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;echo &#39;net.ipv4.tcp_fastopen=3&#39; &amp;gt; /etc/sysctl.d/50-tcp_fastopen.conf
sysctl -p /etc/sysctl.d/50-tcp_fastopen.conf
&lt;/code&gt;&lt;/pre&gt;
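&lt;p&gt;To verify the setting took effect, read it back; a value of 3 means both client and server support are enabled:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ sysctl net.ipv4.tcp_fastopen
net.ipv4.tcp_fastopen = 3
&lt;/code&gt;&lt;/pre&gt;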

&lt;p&gt;Linux (from version 3.13) generates a key when an application sets the relevant
&lt;code&gt;setsockopt&lt;/code&gt; syscall options for the first time. Until a key is set, the proc value is all zeros, so don&amp;rsquo;t be alarmed.&lt;/p&gt;

&lt;p&gt;By default, because the key does not persist between reboots, production use of TFO should include saving the key
securely (generating random keys, setting restrictive file permissions) via sysctl. This will ensure clients can use the existing cookie without needing a new key generated.&lt;/p&gt;

&lt;p&gt;To generate a new key and make persistent via sysctl:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;RAND=$(openssl rand -hex 16)
NEWKEY=${RAND:0:8}-${RAND:8:8}-${RAND:16:8}-${RAND:24:8}
echo &amp;quot;net.ipv4.tcp_fastopen_key=$NEWKEY&amp;quot; &amp;gt; /etc/sysctl.d/50-tcp_fastopen_key.conf
chmod 600 /etc/sysctl.d/50-tcp_fastopen_key.conf; chown root /etc/sysctl.d/50-tcp_fastopen_key.conf
sysctl -p /etc/sysctl.d/50-tcp_fastopen_key.conf
unset RAND NEWKEY
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;monitoring:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Monitoring&lt;/h1&gt;

&lt;p&gt;To view connection statistics for clients, &lt;code&gt;ip tcp_metrics&lt;/code&gt; is available from &lt;code&gt;iproute2&lt;/code&gt; since version
&lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/commit/ip/tcp_metrics.c?id=ea63a69b6d2f230af5471ddfa7b05b369fc49816&#34;&gt;v3.7.0&lt;/a&gt;
(2012-12-11). This tool can show you the cached MSS as well as the TFO cookie used, for a single IP or all IPs (exclude the
&lt;code&gt;show 127.0.0.1&lt;/code&gt; option in that case).&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ ip tcp_metrics show 127.0.0.1
127.0.0.1 age 93935.839sec rtt 875us rttvar 500us cwnd 10 fo_mss 65495 fo_cookie cec297e8b2723c29
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For both clients and servers the following counters are available in &lt;code&gt;/proc/net/netstat&lt;/code&gt;, for an easy overview you can use the following (adjusting
the column numbers if required):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;grep &#39;^TcpExt:&#39; /proc/net/netstat | cut -d &#39; &#39; -f 87-92  | column -t
&lt;/code&gt;&lt;/pre&gt;
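&lt;p&gt;Because the column positions can shift between kernel versions, a small awk sketch can pick the TFO counters out by name instead of by column number:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;grep &#39;^TcpExt:&#39; /proc/net/netstat | awk &#39;
    NR == 1 { for (i = 1; i &amp;lt;= NF; i++) name[i] = $i }      # header row
    NR == 2 { for (i = 1; i &amp;lt;= NF; i++)
                  if (name[i] ~ /^TCPFastOpen/) print name[i], $i }&#39;
&lt;/code&gt;&lt;/pre&gt;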

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;TCPFastOpenActive&lt;/code&gt; - number of successful outbound TFO connections.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TCPFastOpenActiveFail&lt;/code&gt; - number of SYN-ACK packets received that did not acknowledge data sent in the SYN packet and
caused a retransmission without SYN data. Note the original SYN packet contained a cookie + data; this is not the number
of connections to servers that didn&amp;rsquo;t support TFO.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TCPFastOpenPassive&lt;/code&gt; - number of successful inbound TFO connections.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TCPFastOpenPassiveFail&lt;/code&gt; - number of inbound SYN packets with an invalid TFO cookie.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TCPFastOpenCookieReqd&lt;/code&gt; - number of inbound SYN packets with the TFO option set but no cookie, i.e. requesting a cookie&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TCPFastOpenListenOverflow&lt;/code&gt; - number of inbound SYN packets that will have TFO disabled because the socket has
exceeded the max queue length.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additional counters that may be useful were also created:
&lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f19c29e3e391a66a273e9afebaf01917245148cd&#34;&gt;commit&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;TCPSynRetrans&lt;/code&gt;: number of SYN and SYN/ACK retransmits to break down retransmissions into SYN, fast-retransmits,
timeout retransmits, etc.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TCPOrigDataSent&lt;/code&gt;: number of outgoing packets with original data (excluding retransmission but including data-in-SYN).
This counter is different from &lt;code&gt;TcpOutSegs&lt;/code&gt; because &lt;code&gt;TcpOutSegs&lt;/code&gt; also tracks pure ACKs.  &lt;code&gt;TCPOrigDataSent&lt;/code&gt; is more
useful to track the TCP retransmission rate.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&#34;rotating-keys:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Rotating Keys&lt;/h1&gt;

&lt;p&gt;Linux provides &lt;code&gt;/proc/sys/net/ipv4/tcp_fastopen_key&lt;/code&gt; (&lt;code&gt;net.ipv4.tcp_fastopen_key&lt;/code&gt; via sysctl), which can be used to display the current key, as well as change the
key to a new key. The key is 16 bytes, expressed as a 32 character hex string, broken into four 8 character blocks separated
by dashes.&lt;/p&gt;

&lt;p&gt;Rotation of keys can be achieved in exactly the same way we generated keys in sysctl.&lt;/p&gt;

&lt;p&gt;In a multi-server environment, you&amp;rsquo;ll want to randomly generate a key once and set the same key on all servers.&lt;/p&gt;
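&lt;p&gt;For example, generating one key and distributing it to each server in the farm might look like the following sketch (the &lt;code&gt;web1&lt;/code&gt; and &lt;code&gt;web2&lt;/code&gt; hostnames are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;RAND=$(openssl rand -hex 16)
NEWKEY=${RAND:0:8}-${RAND:8:8}-${RAND:16:8}-${RAND:24:8}
# web1 and web2 are placeholder hostnames for the servers in the farm
for host in web1 web2; do
    ssh root@$host &amp;quot;sysctl -w net.ipv4.tcp_fastopen_key=$NEWKEY&amp;quot;
done
&lt;/code&gt;&lt;/pre&gt;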

&lt;h1 id=&#34;use-cases:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Use Cases&lt;/h1&gt;

&lt;h2 id=&#34;mobile-devices:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Mobile Devices&lt;/h2&gt;

&lt;p&gt;TFO, being designed to reduce latency by removing an entire round trip, benefits mobile devices, which often have higher
latency than other Internet connections.&lt;/p&gt;

&lt;p&gt;Table data below, provided by the &lt;a href=&#34;http://chimera.labs.oreilly.com/books/1230000000545/ch07.html#_brief_history_of_the_g_8217_s&#34;&gt;High Performance Browser
Networking&lt;/a&gt; book,
illustrates the typical latency for mobile networks.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Generation&lt;/th&gt;
&lt;th&gt;Data Rate&lt;/th&gt;
&lt;th&gt;Latency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2G&lt;/td&gt;
&lt;td&gt;100–400 Kbps&lt;/td&gt;
&lt;td&gt;300–1000 ms&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;3G&lt;/td&gt;
&lt;td&gt;0.5–5 Mbps&lt;/td&gt;
&lt;td&gt;100–500 ms&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;4G&lt;/td&gt;
&lt;td&gt;1–50 Mbps&lt;/td&gt;
&lt;td&gt;&amp;lt; 100 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;As discussed in other sections, a theoretical benefit of TFO could be to halve the request and response time (if
the response fits within TCP&amp;rsquo;s initial congestion window, initcwnd). A mobile device with a round trip latency of 500ms would
require two round trips using a standard, non-TFO, TCP connection. Assuming there is no server processing time, this
connection would take 1000ms. With TFO, the first round trip is removed, so the connection time is now 500ms.&lt;/p&gt;

&lt;h2 id=&#34;static-site-cdn:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Static Site / CDN&lt;/h2&gt;

&lt;p&gt;Static content such as JavaScript, CSS, and images has no idempotency requirements, so even in the possible, however
unlikely, event of duplicate data it would greatly benefit from TFO&amp;rsquo;s reduced latency.&lt;/p&gt;

&lt;p&gt;Websites that are primarily static sites, or micro sites, likewise have no idempotency requirements and are therefore safe to use with TFO.&lt;/p&gt;

&lt;p&gt;For this type of content, when the response is less than 14,600 bytes it will often fit within the server&amp;rsquo;s TCP
initial congestion window (assuming a value of 10 segments), and the entire request and response time could be reduced by
up to 50% (assuming the server has zero processing/fetching time).&lt;/p&gt;
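&lt;p&gt;As a quick sanity check of that figure (assuming a typical Ethernet MSS of 1,460 bytes), and where initcwnd can be
inspected via iproute2 (the gateway and device below are illustrative):&lt;/p&gt;

```shell
# 10 segments of 1,460 bytes each fit in the first flight of data:
echo $(( 10 * 1460 ))    # 14600

# The default route can carry an explicit initcwnd; when absent, the
# kernel default (10 segments since Linux 2.6.39) applies. Changing
# the route requires root:
#   ip route show default
#   sudo ip route change default via 203.0.113.1 dev eth0 initcwnd 10
```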

&lt;h2 id=&#34;reducing-tls-latency:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Reducing TLS Latency&lt;/h2&gt;

&lt;p&gt;TLS introduces at least one additional round trip for versions up to and including 1.2 (TLS 1.3 addresses this); for
new connections, two additional round trips are required.&lt;/p&gt;

&lt;p&gt;Where the TLS service supports session resumption, via either session tickets or session IDs, only the first TLS round
trip is required: the ClientHello and the server&amp;rsquo;s responses (which can be sent in multiple packets). New connections,
or connections with expired or invalid tickets, incur one additional round trip.&lt;/p&gt;

&lt;p&gt;With TFO and TLS resumption combined, only one round trip in total is required to set up the connection (the same amount as a normal
TCP connection without TFO and TLS). This works because the initial SYN packet carries the ClientHello TLS data, as well as
the session resumption ticket or ID, and the second packet carries the application data. Retransmitted data is dropped,
providing idempotence protection.&lt;/p&gt;
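&lt;p&gt;Session resumption itself can be observed with &lt;code&gt;openssl s_client&lt;/code&gt;; a rough sketch, with
&lt;code&gt;example.com&lt;/code&gt; standing in for the host under test:&lt;/p&gt;

```shell
# First connection: save the session (ticket or ID) to a file:
echo | openssl s_client -connect example.com:443 -sess_out session.pem

# Second connection: present the saved session. The summary line reports
# "Reused" when resumption succeeded, "New" when a full handshake ran:
echo | openssl s_client -connect example.com:443 -sess_in session.pem | grep -E 'New|Reused'
```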

&lt;h2 id=&#34;dns-servers-using-tcp:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;DNS Servers using TCP&lt;/h2&gt;

&lt;p&gt;DNS primarily uses UDP for transactions, and the original specification defined a maximum packet size of 512 bytes. A
client would request a record over UDP, and if the response required more than 512 bytes, the response was truncated and a
bit set instructing the client to retry over TCP. This required a total of 3 round trips.&lt;/p&gt;

&lt;p&gt;This was recognised and &lt;a href=&#34;http://tools.ietf.org/html/rfc2671&#34;&gt;Extension mechanisms for DNS&lt;/a&gt; (EDNS) was released in 1999,
and has since been superseded by &lt;a href=&#34;http://tools.ietf.org/html/rfc6891&#34;&gt;RFC6891&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;EDNS permits &lt;a href=&#34;http://tools.ietf.org/html/rfc6891#section-4.3&#34;&gt;UDP responses&lt;/a&gt; to exceed the 512-byte limit
and allows responses to be sent over multiple UDP packets, catering for protocols such as DNSSEC which may exceed it.&lt;/p&gt;

&lt;p&gt;Because of EDNS, TCP usage by DNS resolvers is uncommon for most transactions (0.61% reported by a medium-sized ISP&amp;rsquo;s DNS resolver).&lt;/p&gt;

&lt;p&gt;Regardless, whenever TCP may be used, TFO can reduce the round trips required for a successful UDP downgrade to
TCP and the subsequent 3WHS. But the real-world benefits will be minimal.&lt;/p&gt;
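&lt;p&gt;Both behaviours are easy to observe with &lt;code&gt;dig&lt;/code&gt; (part of bind-utils); &lt;code&gt;example.com&lt;/code&gt; below is a
stand-in for any zone of interest:&lt;/p&gt;

```shell
# Force a query over TCP rather than UDP:
dig +tcp example.com A

# Simulate the pre-EDNS behaviour: advertise a small UDP buffer so a
# large answer (DNSSEC keys here) is truncated and retried over TCP:
dig +bufsize=512 example.com DNSKEY
```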

&lt;h2 id=&#34;reducing-tor-latency:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Reducing Tor Latency&lt;/h2&gt;

&lt;p&gt;Tor, being a TCP protocol that uses TLS, could also make use of TFO. However, I haven&amp;rsquo;t had a chance to verify
whether, during the initial Tor TLS negotiation, the client&amp;rsquo;s first request (containing the client&amp;rsquo;s
certificates) fits within one packet. If it cannot, the remaining data must be transmitted after the 3WHS, nullifying the
benefits of TFO.&lt;/p&gt;

&lt;h2 id=&#34;multipath-tcp:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Multipath TCP&lt;/h2&gt;

&lt;p&gt;Multipath TCP (MPTCP) is an effort to support multiple TCP subflows within a single connection, taking advantage of
multiple links for bandwidth and/or availability - all while remaining transparent to the application.&lt;/p&gt;

&lt;p&gt;A current draft RFC, &lt;a href=&#34;https://datatracker.ietf.org/doc/draft-barre-mptcp-tfo/&#34;&gt;draft-barre-mptcp-tfo&lt;/a&gt;, seeks to
address possible issues and inefficiencies in using MPTCP and TFO concurrently. Among others, it offers suggestions for the
likely exhausted TCP option space (maximum 40 bytes): TFO and MPTCP require 12 bytes each, totalling 24 bytes, with recent
kernels already using an additional 20 bytes.&lt;/p&gt;

&lt;h2 id=&#34;other-tcp-protocols:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Other TCP Protocols&lt;/h2&gt;

&lt;p&gt;TFO is applicable to any TCP connection that is sensitive to latency, not just HTTP connections. Other protocols, such
as file sharing (Samba, CIFS, NFS, etc.) and potentially more, could benefit from a reduced RTT.&lt;/p&gt;

&lt;h2 id=&#34;cookie-prefetching:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;Cookie Prefetching&lt;/h2&gt;

&lt;p&gt;Currently many browsers, such as Chrome/Chromium and Firefox, establish a TCP connection to a server preemptively (TCP
preconnect), before the user requests a resource. The idea is to establish at least one TCP connection before a user
requires it, so that when the connection is needed an already established one can be used, removing TCP&amp;rsquo;s 3WHS from
the request.&lt;/p&gt;

&lt;p&gt;There are a few minor issues with this. First, a TCP connection must be established and held open; some load
balancers and/or web servers close an open connection that has not made a request after a short timeout, and record
this as an error. This causes additional logging and potentially wasted effort investigating
the errors. See these &lt;a href=&#34;http://www.copernica.com/en/blog/how-chromes-pre-connect-breaks-haproxy-and-http&#34;&gt;HAProxy issues&lt;/a&gt;
and this &lt;a href=&#34;https://bugzilla.mozilla.org/show_bug.cgi?id=733748&#34;&gt;Firefox issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another issue is that the TCP connection must be constantly re-established, whenever the browser thinks the user is
about to connect to the site.&lt;/p&gt;

&lt;p&gt;A separate mechanism could be employed using TFO, whereby the browser preemptively establishes a TCP connection with TFO
options, allowing the TCP stack to obtain a cookie. Future connections would not require a preconnect if a TFO cookie was
successfully obtained. Note, however, that the cookie may become invalid (the server&amp;rsquo;s key has changed) or the
client&amp;rsquo;s source IP may change, so the browser may choose to initiate a preconnect after some period of time.&lt;/p&gt;

&lt;p&gt;Note, cookie prefetching would require an API for the application to detect whether a cookie was successfully obtained,
so it knows whether it can close the connection immediately and avoid establishing a preconnect in the future. No such API
is currently available in Linux.&lt;/p&gt;

&lt;h1 id=&#34;history:cff22c5797f6109e97a8ce5147f4daa0&#34;&gt;History&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Feb 17th 2012

&lt;ul&gt;
&lt;li&gt;TCP Fast Open initial draft released.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Sept 30th 2012

&lt;ul&gt;
&lt;li&gt;Linux 3.6 adds TFO client support.
&lt;a href=&#34;http://kernelnewbies.org/Linux_3.6#head-ac78950a7b57d92d5835642926f0e147c680b99c&#34;&gt;Commits&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Dec 10th 2012

&lt;ul&gt;
&lt;li&gt;Linux 3.7 adds TFO server support.
&lt;a href=&#34;http://kernelnewbies.org/Linux_3.7#head-cd32b65674184083465d349ad6d772c828fbbd8b&#34;&gt;Commits&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Jun 30th 2013

&lt;ul&gt;
&lt;li&gt;Linux 3.10 stops clearing cached TFO cookies when clearing other TCP metric caches.
&lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=efeaa5550e4bfd335396415958fe3615530e5d5c&#34;&gt;Commit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Nov 3rd 2013

&lt;ul&gt;
&lt;li&gt;Linux 3.12 encrypts server IP along with client IP when generating MAC.
&lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=149479d019e06df5a7f4096f95c00cfb1380309c&#34;&gt;Commit&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Jan 19th 2014

&lt;ul&gt;
&lt;li&gt;Linux 3.13 enables TFO client support by default by changing the tcp_fastopen sysctl value from 0 to 1.
&lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0d41cca490c274352211efac50e9598d39a9dc80&#34;&gt;Commit&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Linux 3.13 randomly generates TFO key only once a socket requests TFO, instead of each boot.
&lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=222e83d2e0aecb6a5e8d42b1a8d51332a1eba960&#34;&gt;Commit&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Aug 3rd 2014

&lt;ul&gt;
&lt;li&gt;Linux 3.16 adds IPv6 TFO support.
&lt;a href=&#34;http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=3a19ce0eec32667b835d8dc887002019fc6b3a02&#34;&gt;Commit&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;Dec 18th 2014

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://datatracker.ietf.org/doc/rfc7413/&#34;&gt;RFC 7413&lt;/a&gt; is published&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Building the Linux Kernel</title>
      <link>https://bradleyf.id.au/nix/building-kernel/</link>
      <pubDate>Fri, 26 Dec 2014 08:46:02 +1030</pubDate>
      
      <guid>https://bradleyf.id.au/nix/building-kernel/</guid>
      <description>

&lt;h1 id=&#34;how-to-build-the-linux-kernel:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;How to build the Linux Kernel&lt;/h1&gt;

&lt;p&gt;The following instructions are based on CentOS 7; they should work for most other distributions, though the
initial dependencies will need to be reviewed.&lt;/p&gt;

&lt;p&gt;Unless prefixed with &lt;code&gt;sudo&lt;/code&gt;, all commands can be executed as a non-root user.&lt;/p&gt;

&lt;h1 id=&#34;why:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Why&lt;/h1&gt;

&lt;p&gt;There are a few reasons why you&amp;rsquo;d want to build your own kernel; however, most of the points below aren&amp;rsquo;t significant.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access recent enhancements not yet available in Linux distributions. In my case, I wanted to continue running CentOS on
my server, but build the kernel myself to access modern features (prebuilt, unsupported Kernels are also available).&lt;/li&gt;
&lt;li&gt;Learning exercise - this was my secondary reason.&lt;/li&gt;
&lt;li&gt;Creating a slimmer Kernel for performance (though with most functionality built as modules that load only when needed,
I&amp;rsquo;m not sure this matters).&lt;/li&gt;
&lt;li&gt;Security&amp;rsquo;s sake (this requires you to be vigilant and always update the Kernel yourself).&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&#34;install-dependencies:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Install Dependencies&lt;/h1&gt;

&lt;p&gt;In this case, we want tools like &lt;code&gt;gcc&lt;/code&gt; (for compiling) and &lt;code&gt;ncurses&lt;/code&gt; (for menuconfig):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ sudo yum groupinstall &amp;quot;Development Tools&amp;quot;
$ sudo yum install ncurses-devel -y
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;fetch-the-kernel-source:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Fetch the Kernel source&lt;/h1&gt;

&lt;p&gt;Go to &lt;a href=&#34;https://www.kernel.org/&#34;&gt;kernel.org&lt;/a&gt; to fetch the kernel version that you&amp;rsquo;d like, most likely, you&amp;rsquo;ll just want
the latest stable.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ mkdir linux
$ cd linux/
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.18.1.tar.xz
$ tar xfJ linux-3.18.1.tar.xz
$ cd linux-3.18.1/
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;configure-the-kernel:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Configure the Kernel&lt;/h1&gt;

&lt;p&gt;To preconfigure my kernel, I&amp;rsquo;m using the existing configuration options based on CentOS&amp;rsquo;s current running Kernel.
This&amp;rsquo;ll help me compile everything I need, and just look for the specific options I want or don&amp;rsquo;t want.&lt;/p&gt;

&lt;p&gt;The Kernel also comes with options to build a default config (&lt;code&gt;make defconfig&lt;/code&gt;), but I don&amp;rsquo;t know exactly how this
configuration is built. Some options are set in &lt;code&gt;arch/x86/configs/x86_64_defconfig&lt;/code&gt;, but not all.&lt;/p&gt;

&lt;p&gt;But for the moment, I&amp;rsquo;ll reuse CentOS&amp;rsquo;s but you could use the default by running &lt;code&gt;make defconfig&lt;/code&gt; instead of the
following:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ cp /boot/config-`uname -r` .config
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you&amp;rsquo;re using CentOS&amp;rsquo;s config from an older Kernel, you&amp;rsquo;ll want to upgrade the .config file to
incorporate the new options. The command &lt;code&gt;make olddefconfig&lt;/code&gt; upgrades the .config file, setting default values
without prompting. Alternatively, you can run &lt;code&gt;make oldconfig&lt;/code&gt;, which prompts for an answer to each new
configuration option. Only run these if you&amp;rsquo;re upgrading an existing .config file (not one built from &lt;code&gt;make defconfig&lt;/code&gt;).&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ make olddefconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From here, I use &lt;code&gt;make menuconfig&lt;/code&gt;, an ncurses-based interface for choosing Kernel configuration options.
There are also GTK+ (&lt;code&gt;make gconfig&lt;/code&gt;), Qt (&lt;code&gt;make xconfig&lt;/code&gt;), and purely text-based (&lt;code&gt;make config&lt;/code&gt;) interfaces.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ make menuconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;compile-the-kernel:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Compile the Kernel&lt;/h1&gt;

&lt;p&gt;This process takes a while, depending on the options chosen and the age and number of CPUs, and to a lesser extent disk
IO speed. Optionally, and likely recommended, supply the &lt;code&gt;-j [jobs]&lt;/code&gt; flag, which specifies the number of jobs to
run concurrently - you&amp;rsquo;ll want to set this to the number of CPUs in your system.&lt;/p&gt;

&lt;p&gt;In my case, because I have 2 CPUs, I would run:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ make -j 2
&lt;/code&gt;&lt;/pre&gt;
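&lt;p&gt;If you&amp;rsquo;d rather not hard-code the job count, &lt;code&gt;nproc&lt;/code&gt; (from GNU coreutils) reports the number of
available CPUs:&lt;/p&gt;

```shell
# Report the CPUs available to this process:
nproc
# ...and use it directly as the job count:
#   make -j "$(nproc)"
```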

&lt;p&gt;Because I wanted to time the events, and run at a higher concurrency rate (long story short, it didn&amp;rsquo;t perform any
better), I used the following command to also include timing information:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ START_DATE=`date`; time make -j4; echo &amp;quot;Start date: $START_DATE&amp;quot;; echo -n &amp;quot;End date: &amp;quot;; date
real    350m30.902s
user    440m34.103s
sys     224m7.598s
Start date: Wed Dec 24 09:42:43 ACDT 2014
End date:   Wed Dec 24 15:33:14 ACDT 2014
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Grab a tea, this&amp;rsquo;ll take a while if you&amp;rsquo;re running older hardware (in my case, a Virtual Box guest on a 2008 2.4GHz
Core 2 duo).&lt;/p&gt;

&lt;h1 id=&#34;install-the-kernel:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Install the Kernel&lt;/h1&gt;

&lt;p&gt;This process is far quicker, but may still take a few minutes depending on your disks. Note, only now do we need to
run as a root user.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ sudo make modules_install
$ sudo make install
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;finishing-touches:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Finishing Touches&lt;/h1&gt;

&lt;p&gt;Your Kernel is now built and installed, and it &lt;em&gt;should&lt;/em&gt; have also updated the GRUB configuration. In some cases
you may need to run &lt;code&gt;mkinitrd&lt;/code&gt;, but it appears &lt;code&gt;make install&lt;/code&gt; did more work than I expected.&lt;/p&gt;

&lt;p&gt;In my final case, I wanted this new Kernel to be my default, so I set it like so:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;$ sudo grub2-set-default 0
&lt;/code&gt;&lt;/pre&gt;

&lt;h1 id=&#34;additional-reading:963bb6f9b83ef4218a359746e4b5b032&#34;&gt;Additional Reading&lt;/h1&gt;

&lt;p&gt;See &lt;a href=&#34;http://www.kroah.com/lkn/&#34;&gt;Linux Kernel in a Nutshell&lt;/a&gt;.&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>