
Code 16: Rediscovering the Server – Josh Duck

Josh Duck is a Brisbane native and front-end engineer at Facebook. He’s contributed to React, managed the open source effort behind Relay, and has helped build Facebook products like Profile and Search.

Josh came to last year’s Code conference to give a presentation that fascinated all of us as a case study of sorts – it’s not often you get to glimpse inside the working processes of one of the world’s biggest online presences.

As the name of the talk implies, Rediscovering the Server was about Facebook engineers finding the balance between client-side and server-side that would give users the best experience, particularly from a performance perspective.

Here’s our Wrap breakdown of the key points, takeaways and caveats from Josh’s presentation.

Rediscovering the Server


Key points

Facebook is a huge application with a lot of moving parts and, like everyone in the web industry, the people at Facebook think a lot about performance.

The phones we carry are effectively supercomputers but their power is limited without a connection.

The web, as a platform, is structured as a relationship between the browser and the server.

The web was devised as a way to obtain information stored on different computers by different programs.

Browser vendors create advanced pieces of software using very modern technologies, but they need a data source, and servers provide that source.

Servers originally served up static HTML pages, but then users wanted to interact by blogging and posting photos, so the servers had to learn to interact with users.

Programming tools like Visual Basic didn’t work for the web because the application logic was on the server but the user interface logic was on the browser, often thousands of miles apart.

URLs are the interface that lets browsers request, obtain and display content from web servers.

Server-side rendering allowed users with a browser and a URL or domain to access content, but now we’re focusing on client-side rendering, because we are asking more of our applications.

“The performance of our applications is determined by how the web is structured.”

Takeaways

When the iPhone came out about 10 years ago, it raised the bar for what we expected from our applications in terms of look and feel.

To meet these expectations, there had to be a way of limiting the time that constant server-client round trips take – and that’s where JavaScript came in.

Even then, the ecosystem was immature and the developer experience unpolished, until jQuery papered over the parts of JavaScript that were hard to understand and allowed us to design apps for the browser.
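A rough sketch of the pattern jQuery made approachable – updating part of a page in place instead of making a full server round trip. The endpoint, selectors and markup here are our own, purely illustrative:

    // Fetch more items and append them to the page without a full reload.
    // '/api/posts' and the selectors are hypothetical.
    $('#load-more').on('click', function () {
      $.getJSON('/api/posts?page=2', function (posts) {
        posts.forEach(function (post) {
          $('#feed').append('<li>' + post.title + '</li>');
        });
      });
    });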

Once the logic was moved to the browser, it delivered a new problem to be solved – load time. Even using React to render the client experience – which was great for the developer experience – didn’t deliver what was required for users: performance.

Now we have great JavaScript libraries and tools that let us create interactive experiences, but we’re still figuring out how to make them fast.

With native apps, releasing new versions every two weeks was seen as fast, but web apps were being updated several times a day. That improves the product, but it doesn’t enhance the user experience if every update means more waiting for code to load.

Universal JavaScript uses isomorphic rendering to load client templates on the server in advance, which gets HTML to the client quickly and creates high perceived performance – but it really masks the problem as the functionality doesn’t arrive in time to meet user expectations.
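Josh didn’t show code for this, but a minimal sketch of isomorphic rendering, assuming an Express server and a shared App component (all names here are illustrative), looks something like:

    // Render the same React component tree on the server that the client
    // will use, so the user sees markup before the JS bundle arrives.
    const express = require('express');
    const React = require('react');
    const ReactDOMServer = require('react-dom/server');
    const App = require('./App'); // hypothetical shared component

    const app = express();
    app.get('/', function (req, res) {
      const html = ReactDOMServer.renderToString(React.createElement(App));
      res.send('<!doctype html><div id="root">' + html +
        '</div><script src="/bundle.js"></script>');
    });
    app.listen(3000);

The HTML arrives quickly, but the page still isn’t interactive until the bundle downloads and the client takes over – which is exactly the masking Josh describes.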

Moving the application logic from the server to the client-side JavaScript has taken away what made the web work well: the ability to incrementally load web pages and sites.

What’s needed is to look at what the server is good at and what the client is good at and find a balance between the two.

Routing can help achieve this and, while there are some great routing libraries out there, Facebook built its own called MatchRoute. Together with a build process that creates bundles for specific routes, this loads what is needed and ignores what is not, or defers it.
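MatchRoute itself is internal to Facebook, so as an illustration only, here is roughly how route-based bundle splitting can be expressed with the standard dynamic import() (assuming a bundler, such as webpack, that splits on it; paths and module names are made up):

    // Route table: each entry lazily loads the bundle for that route.
    const routes = {
      '/profile': function () { return import('./ProfileBundle'); },
      '/search': function () { return import('./SearchBundle'); },
    };

    function loadRoute(path) {
      const load = routes[path];
      if (!load) return Promise.reject(new Error('Unknown route: ' + path));
      // Bundles for other routes are never fetched, or can be
      // prefetched later when the browser is idle.
      return load().then(function (mod) { return mod.default; });
    }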

Data fetching remained an issue, but Facebook came up with GraphQL, which lets clients define queries that fetch only the data they need, reducing the number of data-fetching round trips.
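The idea in a hypothetical sketch – the client names exactly the fields it needs and gets them back in a single round trip (the endpoint and fields are illustrative, not Facebook’s actual schema):

    // Ask for just a user's name and a 50px profile picture – nothing more.
    const query = '{ user(id: "4") { name profilePicture(size: 50) { uri } } }';

    fetch('/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: query }),
    })
      .then(function (res) { return res.json(); })
      .then(function (result) { console.log(result.data.user.name); });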

With this approach, Facebook separates the What (the data needed) from the How (how the data is fetched) and refines its approach to each.

Facebook also introduced a framework called Relay, which delivers the benefits of routing, GraphQL and preloading. A lot of this can be expressed as defining the context and the relationships of the data and acting accordingly.
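In the Relay API of the time (Relay classic), a component declares its data requirements as a GraphQL fragment right alongside the view, and the framework works out how to fetch and cache it. A sketch with illustrative component and field names:

    // The view declares what it needs; Relay figures out how to get it.
    const ProfileHeader = Relay.createContainer(ProfileHeaderView, {
      fragments: {
        user: function () {
          return Relay.QL`
            fragment on User {
              name
              profilePicture { uri }
            }
          `;
        },
      },
    });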


Caveats

Building a good application is not just about raw computing power. It is the dependence on servers that lies at the root of many performance issues.

There are criticisms that JavaScript is too slow, but Facebook used React in its native app to load JavaScript from the device and obtained good performance – it was the constraints of shipping code over the web that were the issue.

One approach is to aggressively cache everything and preload it, in effect pretending to be a native app. But native apps can mean shipping huge amounts of code, and caching doesn’t help if the user has to wait for things to load, even if it is only the first time.

Caching also means reloading everything with every new release. The issue is not just JavaScript download time – it’s also parse time, compile time and execution time.

“Don’t make it faster. Just don’t do it at all.”

Resources

@joshduck
slides
website
github
Relay

