<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>@alexdln</title>
		<link>https://alexdln.com/blog</link>
		<atom:link href="https://alexdln.com/rss" rel="self" type="application/rss+xml" />
		<description>Blog by @alexdln.com</description>
		<language>en-gb</language>
		<lastBuildDate>Wed, 22 Apr 2026 10:06:00 GMT</lastBuildDate>
		<item><title><![CDATA[Facets as a Formatting Engine]]></title><link>https://alexdln.com/blog/facets</link><guid isPermaLink="true">https://alexdln.com/blog/facets</guid><pubDate>Wed, 22 Apr 2026 10:06:00 GMT</pubDate><description><![CDATA[Formatting is an important tool for emphasizing your point. We use it to highlight key parts of our work every day, including when developing services. One interesting example is Facets - a formatting approach in atproto.]]></description><content:encoded><![CDATA[Formatting is an important tool for emphasizing your point. We use it to highlight key parts of our work every day, including when developing services.

WYSIWYG editors, HTML tags, Markdown symbols. These are convenient tools that let users see text exactly as we intended to convey it. Under the hood, however, this is often markup embedded directly within the text itself. What we see isn’t the same as what we store.

But there’s another way - not through markup inside the text, but through a layer on top of it. One example is Facets - a formatting approach in atproto.

Everything is a facet (except plain text)

I started thinking about this back when I was developing atsky.app (an appview for Bluesky). One of the first improvements was polls, which were displayed only in atsky, while Bluesky showed the post as usual.

Later, this same approach became the basis for displaying code and mathematical expressions. However, in Bluesky, such elements are still not displayed directly - instead, the user sees a link saying “code not supported, open in Atsky.”

https://bsky.app/profile/alexdln.com/post/3mbybcmnh522o

It was an interesting experience, but the possibilities I saw in it were far more intriguing. I asked myself: just how much can be done with facets? As you’ve already gathered from the title - quite a lot. This article is dedicated to that journey and its results.

Formatting

First, a little about the problem facets solve - formatting.

Formatting is the process of giving data (such as text) structure, semantics, and/or visual representation according to specified rules.


The main formatting methods in development are HTML and Markdown. Both add structure through inline markup, using key expressions to define formatting boundaries within the text itself.

<p>Hello <b>World!</b></p>


But if we don’t understand this language, it looks like strange, incorrect text to us.

To address the issue of accessibility, some atproto standards, such as standard.site, recommend storing textContent separately, so that any user or service can read the record’s content, while the content blocks themselves, with all their formatting, are typically stored under the “content” key.

{
  "$type": "site.standard.document",
  "site": "at://did:plc:abc123/site.standard.publication/3lwafzkjqm25s",
  "path": "/blog/getting-started",
  "title": "Getting Started with Standard.site",
  "description": "Learn how to use Standard.site lexicons in your project",
  "textContent": "Full text of the article...",
  "content": { "$type": "com.example.blog", "blocks": ["..."] },
  "publishedAt": "2024-01-20T14:30:00.000Z"
}


An alternative formatting method is the so-called annotation layer. This approach involves defining formatting boundaries not within the text itself, but in a separate data layer. One example is the facets feature in atproto.

{
  "features": [
    {
      "$type": "app.bsky.richtext.facet#link",
      "uri": "https://alexdln.com/blog/facets"
    }
  ],
  "index": {
    "byteEnd": 16,
    "byteStart": 10
  }
}


Currently, atproto itself uses them only in Bluesky’s rich text - links, mentions, and hashtags. Writing-focused services like pckt.blog, leaflet.pub, and offprint.app also use them for standard formatting elements - bold, italics, underlining, etc.

How it works

As mentioned earlier, facets are an annotation layer that stores formatting instructions separately, without altering the text itself. Let’s break down what a facet consists of:

index - the byte range where formatting begins and ends. Note that offsets are counted in bytes, not characters. RichText sequentially passes through each facet and applies its features to the specified range.

"index": {
  "byteEnd": 10,
  "byteStart": 16
}
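
Byte offsets are counted over the UTF-8 encoding of the text, not over JavaScript string indices, so multi-byte characters shift them. A minimal sketch of computing such a range (the helper name is mine):

const encoder = new TextEncoder();

function byteRangeOf(text, substring) {
  const charStart = text.indexOf(substring);
  // Byte offset = UTF-8 length of everything before the match
  const byteStart = encoder.encode(text.slice(0, charStart)).length;
  const byteEnd = byteStart + encoder.encode(substring).length;
  return { byteStart, byteEnd };
}

// Multi-byte characters make byte and character offsets diverge:
byteRangeOf("héllo wörld", "wörld"); // { byteStart: 7, byteEnd: 13 }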


features - the set of formatting instructions themselves: a type (bold, italic, link) and additional data for that specific type (e.g. the link URL). This is an array, so formally you can specify multiple formatting types within a single range.

"features": [
  {
    "$type": "app.bsky.richtext.facet#link",
    "uri": "<https://alexdln.com/blog/facets>"
  }
]


And if you look at these two parts together, you can see that they allow embedding any data. Here’s an example of how the math works in atsky.app:

[{
  "features": [
    {
      "$type": "app.bsky.richtext.facet#link",
      "uri": "<https://atsky.app/profile/alexdln.com/post/3mbybcmnh522o>"
    }
  ],
  "index": {
    "byteEnd": 98,
    "byteStart": 56
  }
}, {
  "features": [
    {
      "$type": "app.bsky.richtext.facet#code.latex",
      "code": "T(n) = a \\cdot n + b \\cdot \\log n + c"
    }
  ],
  "index": {
    "byteEnd": 98,
    "byteStart": 56
  }
}]


You can see overlapping facets here, and handling them is a common feature of current implementations: if formatting has already been applied to a range, subsequent facets on it are ignored (since multiple formatting types for one range would more likely be specified as multiple features within a single facet). Therefore, in Bluesky the code is ignored, and in Atsky the link is ignored.

Post on Atsky - https://atsky.app/profile/alexdln.com/post/3mbybcmnh522o

Post on Bluesky - https://bsky.app/profile/alexdln.com/post/3mbybcmnh522o
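
One possible sketch of that precedence rule, assuming the client first filters out feature types it cannot render and then lets the first facet win per byte range (real clients may weigh features differently):

function applyPrecedence(facets, isSupported) {
  // Drop facets whose features this client cannot render at all
  const known = facets.filter((facet) => facet.features.some(isSupported));
  // Sort by start offset, then keep the first facet per claimed range
  const sorted = [...known].sort((a, b) => a.index.byteStart - b.index.byteStart);
  const kept = [];
  let lastEnd = -1;
  for (const facet of sorted) {
    if (facet.index.byteStart >= lastEnd) {
      kept.push(facet);
      lastEnd = facet.index.byteEnd;
    }
  }
  return kept;
}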

It is important to note that facets operate specifically in bytes. In the most basic version, their implementation would look something like this:

import { Fragment } from 'react';

// Facet offsets count UTF-8 bytes, while JS strings are indexed by
// UTF-16 code units, so byte positions must be converted first
const bytePositionToCharPosition = (text, bytePos) => {
  const bytes = new TextEncoder().encode(text);
  return new TextDecoder().decode(bytes.slice(0, bytePos)).length;
};

if (!facets.length) return text;

// Assumes facets are sorted by byteStart and do not overlap
return facets.reduce((acc, facet, index) => {
  const nextFacetStart = facets[index + 1]
    ? bytePositionToCharPosition(text, facets[index + 1].index.byteStart)
    : undefined;

  acc.push(
    // The formatted segment covered by the current facet
    <RichTextFeature key={facet.index.byteStart} features={facet.features}>
      {text.substring(
        bytePositionToCharPosition(text, facet.index.byteStart),
        bytePositionToCharPosition(text, facet.index.byteEnd),
      )}
    </RichTextFeature>,
    // The plain-text gap between this facet and the next one
    <Fragment key={`${facet.index.byteStart}_next`}>
      {text.substring(bytePositionToCharPosition(text, facet.index.byteEnd), nextFacetStart)}
    </Fragment>,
  );
  return acc;
}, [
  // Plain text before the first facet
  text.substring(0, bytePositionToCharPosition(text, facets[0].index.byteStart)),
]);


Flexibility

The example above also shows another clear benefit of this approach - high cross-platform compatibility. If you support only links, you process only links; if you support the full set of features, you process all the specified facets; and if you’re unfamiliar with the system or just starting to build a tool, you simply display the plain text.

The service author has access to the content regardless of language, tools, environment, or runtime. You can start with text and gradually add support for more elements.

Iterative development where you can get results immediately and then gradually improve them. One of the many things we love about atproto.

Scalability

As mentioned earlier, the only standardized use of facets in atproto itself right now is rich text in Bluesky - links, mentions, and hashtags. This is perhaps the most underrated feature in the entire protocol.

It’s a powerful tool for inline formatting, but as you’ve seen from the code and links, it’s also suitable for other elements. More precisely, for absolutely any element.

One example of its use is articles. Current services store the text separately and the formatting in special blocks:

{
  "content": {
    "$type": "com.example.blog",
    "blocks": [{
      "$type": "com.example.blog.blockquote",
      "content": "Everything is a facet (except plain text)"
    }]
  },
  "textContent": "Everything is a facet (except plain text)"
}


But if you look at this block, you’ll notice that its content is essentially no different from the facets shown above. So let’s implement it as a facet:

{
  "facets": [{
    "features": [
      {
        "$type": "com.example.richtext.facet#blockquote"
      }
    ],
    "index": {
      "byteEnd": 41,
      "byteStart": 0
    }
  }],
  "textContent": "Everything is a facet (except plain text)"
}


And this is a perfectly valid facet; all that’s left is to integrate it if you want to support it. This way, we avoid duplicating text, and the parsing logic shifts to the rendering engine.
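
On the rendering side, supporting such a facet is just one more entry in the feature-to-element mapping. A hypothetical sketch (the type names come from the examples above):

// Hypothetical mapping from feature $type to an HTML tag;
// unknown types fall back to plain text, as described earlier
const FEATURE_TAGS = {
  'app.bsky.richtext.facet#link': 'a',
  'com.example.richtext.facet#blockquote': 'blockquote',
};

function wrapSegment(feature, html) {
  const tag = FEATURE_TAGS[feature.$type];
  return tag ? `<${tag}>${html}</${tag}>` : html;
}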

Types and lexicon

After a series of experiments and comparisons, I concluded that a facet can be used not only to display an expanded post on a social network, but also to generate literally any text. To make this possible, I created a lexicon, which you can view on pdsls. Here are its types as of this writing:

net.atview.richtext.facet#b,
net.atview.richtext.facet#i,
net.atview.richtext.facet#u,
net.atview.richtext.facet#code,
net.atview.richtext.facet#strikethrough,
net.atview.richtext.facet#highlight,
net.atview.richtext.facet#link,
net.atview.richtext.facet#mention,
net.atview.richtext.facet#h2,
net.atview.richtext.facet#h3,
net.atview.richtext.facet#h4,
net.atview.richtext.facet#h5,
net.atview.richtext.facet#h6,
net.atview.richtext.facet#blockquote,
net.atview.richtext.facet#codeBlock,
net.atview.richtext.facet#media,
net.atview.richtext.facet#bskyPost,
net.atview.richtext.facet#ul,
net.atview.richtext.facet#ol,
net.atview.richtext.facet#website,
net.atview.richtext.facet#horizontalRule,
net.atview.richtext.facet#iframe,
net.atview.richtext.facet#math,
net.atview.richtext.facet#hardBreak


If you’re reading this article in its original form (at alexdln.com/blog/facets), you can inspect its data by clicking the button below or on pdsls. This article is written using the lexicon and approaches described above, as well as some interesting sugar from other implementations. For example, two line breaks in the text create a new block, similar to markdown engines.
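
That block-splitting sugar is easy to picture (assuming blocks are delimited by a double line break, as in Markdown):

// Two consecutive line breaks start a new block; facet byte
// ranges still refer to offsets in the original, unsplit text
const blocks = textContent.split('\n\n');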

Of course, this is an experiment, not a final proposal. This approach adds significant complexity to the rendering tools themselves, whereas current implementations are easier to integrate. Nevertheless, I hope you found at least some of these ideas interesting.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreiffpkpym4iom3fzvz7pjaxgetjp66kjawfmfsl6tugyqllqexnjd4@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Open Social Software. npmx]]></title><link>https://alexdln.com/blog/open-social-software</link><guid isPermaLink="true">https://alexdln.com/blog/open-social-software</guid><pubDate>Thu, 02 Apr 2026 10:44:00 GMT</pubDate><description><![CDATA[We’ve just celebrated reaching 3,000 stars, made a number of major releases, and the team gave a really great presentation at atmosphereconf... But it seems I still hadn’t actually answered so many questions - just teased at the end of the previous article. I think it’s time to formulate the answers and finally gather them all in one place.]]></description><content:encoded><![CDATA[It’s been a month since the release of npmx.dev. We had a wonderful release week with many informative and in-depth articles not only about the experience of participating but also about the technical foundation. Thank you to everyone who is helping us on this journey, and a big thanks to everyone who supported us and shared their kind words. If you missed it, read and discover all these wonderful stories in my previous article or in the official release.

At the same time, we’ve just celebrated reaching 3,000 stars, made a number of major releases, and the team gave a really great presentation at atmosphereconf


But it seems I still hadn’t actually answered so many questions - just teased at the end of the previous article. I think it’s time to formulate the answers and finally gather them all in one place.

Prehistory

But to make the answer clearer, it’s worth going back a bit. By the time I joined, it was already a well-organized community that welcomed everyone with warmth and helped us build together. It was an intense and enjoyable time, and together we solved all the problems fairly quickly and completed most of our plans. Then we started talking more about the future, architecture, and details - how to make the service fully-fledged, stable and friendly for everyone.

We thought a lot about accessibility, design, security, performance, copywriting, and, of course, the community. This became a turning point when the code took a back seat. Yes, it happened in less than a month, but it was inevitable. After all, behind technology there is never just code, but people and an idea.

At the same time, there was a post online where we were all discussing the OSS community’s attitude toward non-developers.


I’m not one of those who thinks there’s a big difference in attitude. In most cases, the community is either friendly to everyone or largely closed off within a specific circle. And it’s rare to find a place where code is welcome but ideas and other forms of contribution are not.

Hostages of Habits


Yet in our daily lives, this difference does exist - it’s the infrastructure. When we talk about representing contributors at the service level, perhaps the first thing that comes to mind is the list of contributors on GitHub. It’s also, in a way, the last. Everything else is, at best, isolated efforts by individual projects to make the contributions of other participants visible. But despite these rare attempts, the approach itself doesn’t really change, and this has become an established habit.

But why is there so much focus on this? Don’t get me wrong - code is a very important part of OSS - its value is hard to overstate. But the main value lies in the community, in the people. If we look at projects, the most valuable thing we’re building isn’t individual features - that’s a matter of hours, days, or sometimes even weeks. The most valuable thing is peers. As we build features, the project will continue to grow, but it will gradually approach a plateau. Each new task will contribute less and less to this creation.

Peers, on the other hand, are about exponential growth. But when these peers consist only of developers or another specific group, we very quickly hit the inevitable glass ceiling. A great tool made for oneself and tailored to oneself. Going beyond this zone has always interested me - first as part of work projects, and now [even more so] as an npmx.dev member.

Not a universal experience


“But why go beyond these boundaries if it already works for a million people - it’s a tool for developers, and that’s enough for us”. And behind this lies the same principle as in any other area of our lives. When working with a specific group, we end up in a bubble from which we cease to understand other bubbles.

It’s about design - just because we like it doesn’t mean it aligns with the user experience;

It’s about accessibility - just because we can see it and control it doesn’t mean others will “see” it too;

It’s about performance - just because it works fast for us doesn’t mean other users will consistently get the same experience;

It’s about our representation - what’s obvious to us might be a blind spot even for experienced engineers;

It’s about copywriting, illustrations, security, stability, emotions. It’s about us. After all, we are human - unique and so different. And it is precisely this diversity that shapes our future.

An Open Community


It’s hard to call me an optimist, but I know for sure that we’re moving forward toward a bright future. That’s why one of the tasks that I - and many at npmx - set for ourselves as a key goal is to make collaboration among all of us easier, more convenient, and friendlier for everyone.

With this in mind, I participate in many discussions, and it’s largely this mindset that guided the design of npmx and parts of my vision. And yes, we have a design - you may have even seen it mentioned at atmosphereconf.

We work hard to be an open community, but we’re still limited by our services. However, in recent years, aside from AI, there’s been - in my opinion - an even more important shift that’s worth pausing on for a moment.

At protocol

A decentralized protocol focused on ownership. Your data is owned by you, while additional infrastructure interacts with each user’s data and acts as an intermediary at the general level when necessary.

It solves the problem of connecting users. And that is exactly what we were working on. Any service, any user, any infrastructure can connect to this data and take exactly what they need, displaying it however they want and owning those actions the way they want. This vision resonates with many of us, and in fact, it solves exactly what I described above. That’s why we’re built on it - not on specific, off-the-shelf tools but on the protocol and the vision.

This site, by the way, is also powered by atproto, and the article you’re reading now is stored in my PDS as a site.standard.document record.

Social Layer

We loosely refer to this level of functionality as the social layer. It’s where all user and package interactions take place - projects, profiles, likes, saves, shares. This already makes up a large part of the project, but in essence, we’re just getting started in this direction. One of the first such updates was likes, and to my delight, the wonderful Svelte has been and remains the leader all this time (now it's already 191).


Nowadays, in our daily lives, we interact with dozens of services just to get basic information - to check stars and downloads, read documentation, view source code, look at vulnerabilities and issues, follow authors, join Discord, check releases, read the author’s articles…

This list goes on and on. And it’s even longer for the project author who shares all of this with their beloved community. The true magic of OSS lies in the people who do this, day in and day out. Thank you for that!

You’re making magic happen, and we can’t do it for others, but we can help make it easier. Npmx has already brought together npm view, GitHub stats, the Social Layer, e18e, and many more in one place. Integrations with an even longer list of services are coming soon.

But even this still largely keeps us in our bubble.

Contribution as a layer

Thinking about how to break out of this bubble, I’ve been talking a lot with people both inside and outside the community in search of details and solutions, including in discussions on the post mentioned earlier

We have many people ready to join us on the OSS journey, but sometimes there simply isn’t room for them. Some contributions go unnoticed, except for our personal appreciation for these people. We try to speak up about what our community does and value it endlessly. But we want to show this even more, including at an automated level.

That’s why another idea I hope we can implement is “contribution interactions”. Imagine a social network with users, posts, comments, and embeds. But instead of social media accounts - you have tech profiles, instead of posts - releases and articles, instead of comments - ideas and tasks, and instead of likes - contributor recognition and reactions.

Every PR is an important contribution to the project, but equally important contributions include tasks, designs, suggestions, ideas, bug reports, and even posts expressing disagreement. One of the main ideas is to make this visible. In that very same unified space that has already become home to all our daily services, a new layer will appear to strengthen the connections between these areas. After all, to be a contributor, you don’t necessarily have to be on GitHub - it’s enough to love the idea just as much as we do. And our task is to find a way to say thank you in return. To everyone, everywhere.

After all, behind every technology there are always ideas and people.

A few other updates

npmx stays true to its values, remaining not only the service I love so much, but also a warm, welcoming community that I’m so happy and proud to be a part of. Made by the wonderful alfon.dev


We conducted some research with the team and shared the results on April 1st!


Charts, as well as many other details, have become even more accessible


Comparing packages has become much more convenient


A wonderful version page has appeared



And much, much more…

We’re building this project for people, together. Join us on this great journey.

Site • Discord • Bluesky • GitHub]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreihhqp7qxyk566tytkqy6b4jcyv3gvs2yzwscznnnbe4pfb7uwixr4@png" type="image/jpeg" /></item>
		<item><title><![CDATA[The month. npmx]]></title><link>https://alexdln.com/blog/the-month-npmx</link><guid isPermaLink="true">https://alexdln.com/blog/the-month-npmx</guid><pubDate>Tue, 03 Mar 2026 13:25:00 GMT</pubDate><description><![CDATA[npmx - a fast, modern browser for the npm registry. You've definitely heard about this project, and it’s probably got you just as excited as it has me. We've had a month of exciting and invaluable experience, and we've got some crazy plans ahead. I'm Alex, a project maintainer and one of the many who have been lucky enough to witness the formation of npmx from the front lines. This is an article reflecting on the project, warm stories, wonderful people, and, of course, a look into the future. I hope I can convey this marvelous experience and give you the opportunity to feel it with me.]]></description><content:encoded><![CDATA[npmx - a fast, modern browser for the npm registry. You've definitely heard about this project, and it’s probably got you just as excited as it has me. We've had a month of exciting and invaluable experience, and we've got some crazy plans ahead. I'm Alex, a project maintainer and one of the many who have been lucky enough to witness the formation of npmx from the front lines. This is an article reflecting on the project, warm stories, wonderful people, and, of course, a look into the future. I hope I can convey this marvelous experience and give you the opportunity to feel it with me.


The moment 

London, winter, one of the longest seasons of rain and complete absence of sun. January is considered the most depressing month here. At this time, AI is taking over the world, and many large OSS projects are either declining or being bought by corporations. I work on my small projects day after day to distract myself from this race and relax. During breaks, I share solutions on bluesky and scroll through my feed. Everything happens in the same way on a daily basis - days that have been repeating for years - but on that day, one detail seemed unusual.


More and more people, whom I always read with such interest, began to talk about a certain project for npm. No details, no links, no information on Google. Nothing, but it was enough to grab my attention. Although I am an introvert, it is people who have always inspired me to move forward. And I saw many people who are important to me in this stream (and I will see hundreds more, but more on that later).





Acquaintance 

This went on for several days - I continued to finish my projects robindoc.com & atsky.app, London remained cloudy, the world of development became increasingly alien to me, and time and again I came across a project that had already piqued my interest. It was no longer just about people. The idea of building a project focused on developers in OSS today seemed bold, crazy.

Did I see only a browser for npm in these posts? No, definitely not, but I'm getting ahead of myself again. And now I finally found a link. And it was a match. The idea, the design, the views, the values. Later, we joked that npmx was suspiciously similar to my projects by vibe. But these values are a reflection of all of us, and that's why npmx attracted so many wonderful people. At that moment, I was surfing the site with interest, enjoying the speed and design, and... Of course, I started looking at myself and my packages. Looking at the documentation, statistics, vulnerabilities, versions, metadata, release channels - so much new information was gathered on one page in a convenient interface.


Well, almost convenient - the release channels didn't really work out. My long versions in the experimental channel of some packages broke the interface.


Out of habit, I open devtools, quickly find the cause, and decide to write about it so that I don't get caught up in it in the future. To my great delight, I find a panel with all the links in the footer.


Contributions 

Both Discord and GitHub are invaluable parts of this project that are difficult to talk about separately. But first, a little about the code. I already mentioned that this project attracted me because of the people behind it. And here I omitted an important detail - despite the fact that I have spent almost all of my career working with React & Next, most of my subscriptions are to wonderful authors from other ecosystems — Svelte, OCaml, Astro, Vue.
What am I getting at? npmx turned out to be written in Vue & Nuxt, an ecosystem I'm practically unfamiliar with. But the bug I found was simple, and my interest was strong. So, without thinking twice, I forked it, read the contributing guide, searched, fixed, checked, PR, merged, and released. Did I go through the steps too quickly? Actually, that's exactly how it went. I think installing the dependencies took longer than the rest of the cycle.


My interest was piqued once again.

The whole process was unexpectedly simple. Once you open the project, you immediately stop noticing that this technology is new to you. You just feel it, and that's how it all works. And if you do miss something, the tooling is already set up to check every part, from stability to accessibility. And if there are still gaps after that, you will be reviewed by some of the most experienced people in this ecosystem, who do so honestly, warmly, and wholeheartedly, trying to teach and help you.


Collaborations 

And here we return to a familiar topic - people. I have mentioned this factor many times and will mention it many more later. It is the people behind the project that make it special. More than 200 people, many of whom I had the pleasure of working with to come up with ideas, make corrections, check things, and just communicate.


This is a place where time flows differently. Where up to 100 people participate in decision-making, backing up their arguments with experience and sources. Where ideas are born on the fly and immediately find executors. Where assumptions are backed up by advice from experts in a11y, SEO, performance, etc. Where translations for many languages are added with a single message asking a question. Where everyone works on different things, but still on the same thing.

We heatedly discussed button designs, decided on the cursor, tested the search functionality, worked out the accessibility of every detail, and suggested ideas for graphs (and watched in amazement as Alec Lloyd Probert solved them quickly and elegantly). This list could go on forever, and each of these stories deserves its own article.

This is a place where friends gather. We can't say how long we will continue to develop npmx, but these connections will remain with us for many years to come. And I am happy to play my part in this.





Maintainer 

Almost a week had passed, and I continued to complete task after task that bothered me. At the same time, without my even noticing, the whole project became familiar and dear to me, as if I had been working on it for the last few years. We consulted, reviewed, found mistakes, corrected them, and made some new bugs. But day after day, the project continued to grow exponentially. “How quickly other people’s kids grow,” I joked to myself.

It was growth that I would not have believed possible, but this project was an exception - there were amazing people behind it. Meanwhile, I had practically completed my internal pool of tasks, as had many other participants in the story. Conversations about problems were replaced by discussions about plans, architecture, and standards. Conversations without a source of truth, without managers, without customers, without needs. Built around one goal - to give the web environment and OSS a new life. And, to my infinite happiness, we agreed on the vision for the next steps. And against this backdrop, I was invited to join the maintainers team.


I had already grown to love this project and, of course, was very grateful and happy to receive such an invitation. But for me, it’s more than just a role - perhaps even more important than an everyday job. I came to the project because of the people, and for the same reason I fell in love with it. Every member of an open community can either empower others or destroy their interest entirely. Will I be able to maintain my enthusiasm for a long time, and how long will I be able to be an active member of this community...

I had a warm and honest conversation with the project stewards, and we agreed on almost everything, as we did with all the participants in the story before. And most importantly, first and foremost, we are simply making our everyday experience better, together and for everyone. We agreed that I will stay here as long as I can, and, as it seems now, the energy of this community will keep me warm for a long time to come.


Marathon 

Meanwhile, the project continued to move forward. More and more ideas, more and more communication, more and more issues and PRs. We were accelerating at an incredible pace, working more and more every day. During breakfast, you discuss an idea in Discord, then reviews, issues, plans, *cut*, and suddenly it’s 3 a.m., and you’re still actively discussing the next PR. On each of these nights, we shaved off milliseconds and polished literally every micro-interaction. And the next morning, during breakfast, you go into the chat, see the discussion of the idea, *cut*...


An incredible pace, exponential growth every day, dozens of ideas, hundreds of issues per day, thousands of messages. At the same time, every day new experienced and strong participants arrive, ready to solve problems from the very first minute. A pace that any startup dreams of, a team that any corporation would envy. Yes, I am once again bringing you back to people, because behind all this magic there were already almost 500 people. Each of whom not only sincerely loves the project, but also the people around us. This is what distinguishes it from corporations and startups. So the next step, unexpected and strange to many outsiders, was taken - we announced a vacation.


Vacation 

One week, complete blocking of channels and repositories, disabling discussions and stopping updates, canceling invitations to Discord, pausing posts and technical ideas. And all this in a project that was already used by tens of thousands of people. But it was much more than just a desire to rest. We cannot create a wonderful experience for users if we don't make the experience better for each other. It seems that this philosophy was taken from nuxt and vite, but here we gave it a completely new and warm format.

In the remaining days, we fixed bugs, checked stability, reviewed PRs, and prepared to go out and touch the grass. Then we all got together, celebrated another wonderful day, and went to rest with anticipation of our plans.

For me, this was the first time in my entire career that I stopped programming simply because I wanted to (rather than because of illness or relocation). And I did what I always loved to do - take leisurely walks with my camera through the green corners of London. At the same time, it was a great opportunity to tackle some long-standing projects - articles, design, and social media.


We simply lived and did what we enjoyed, letting go of the usual programming race for a while. Touching the grass wasn't a strict rule, and we touched the snow, trees, and our beloved pets. Sometimes the rule was completely disregarded, so we touched the grass like this:


At the same time, we discussed tastes, countries, shops, cheeses, pizza, places, hobbies, plans, and jokes. Practically strangers to each other, from different corners of the earth - with similar questions, tastes, and problems - we turned these evenings into heartfelt online gatherings.

And then we celebrated 2,000 stars.


The vacation was coming to an end, and a big release was ahead of us. March 3rd, the day when we would openly talk about ourselves and our future for the first time. A month of the project's life was already behind us, a month of wonderful experiences and warm stories. But behind these stories, I left out another, equally important detail - ideas and meaning. And when to talk about greatness if not after a vacation?


Reasons 

"A fast, modern browser" is a phrase you've heard many times before. After all, speed and ease of surfing are important factors, and you'll feel it every fraction of a second you spend on the site. I love this phrase - it's very concise and says a lot, but still not everything.

Behind this project are ideas and visions that we have gathered over years of interacting with services. Everyone has their own experience and priorities, which makes it difficult to choose a single vector. The service is growing in many directions at once: accessibility, community features, data-focused tools, ownership, speed, and so on. But if we simplify it all into one rule, it would probably be "everything you need for development, fast and for everyone."

Fortunately, we know how to achieve this. Most of us have created tools and services before, many for the atproto ecosystem and with a focus on user experience. These are hundreds of experts, each in their own field, who came to npmx to work on a project they love and value. Not just as a service, but as a tool we use every day.
Npmx is much more than a browser - it is a service focused on people. Its goal is to provide quick and high-quality choices, where metrics are based on the opinions of other participants, and where the interface is designed for quick search results rather than retention.

I enjoy working on ideas, optimizations, and design in this system. There are still many opportunities ahead, and we are just getting ready to take action.


The Future 

And now we have gone through this wonderful month together. I am finishing this part literally in the last moments before the launch of the service, and perhaps you have even noticed how the feeling of this article has changed.


Looking back on those January days, I realize how much has changed. Not only with me and the project, but with the entire ecosystem. Community support, OSS activation, changes in npm, and independent interfaces for services. Today, we casually talk about a new service with a convenient interface for GitHub, but just a couple of weeks ago, we were jokingly discussing the post "when githubX". "Bold", I would have said then, and "inevitable", I say without a doubt today.

All we needed was people. And luckily, we stumbled upon that wonderful series of posts. We spent this month together building connections, ideas, and architecture. It takes a lot of time, but we are close to completing the strongest foundation. And we are getting ready to build even more amazing opportunities. With a focus on people, their experience, their convenience, and their connections.

Just as we found value in each other, any project can become the center of a network of wonderful connections.


We build in open source, but the most valuable stuff happens in our chats and conversations with each other. Join us ❤️

Site • Discord • Bluesky • GitHub
]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreig5x5hq2vw3hjhm2ozeydg32j6fqw5v6qpnmkuq6dlw63ihfemu3a@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Next.js v15 — What’s new under the hood]]></title><link>https://alexdln.com/blog/nextjs-v15</link><guid isPermaLink="true">https://alexdln.com/blog/nextjs-v15</guid><pubDate>Thu, 17 Oct 2024 12:16:00 GMT</pubDate><description><![CDATA[Hello! This is another article about next.js. And finally, about the new version! Each release is a set of new, interesting, and controversial features. This version will be no exception. However, the new version is interesting not so much for its new functionality as for the change in priorities and organization in next.js. And yes, as you may have guessed from the title, a significant part of this release is valuable for reflecting on previous mistakes.]]></description><content:encoded><![CDATA[Hello! This is another article about next.js. And finally, about the new version! Each release is a set of new, interesting, and controversial features. This version will be no exception. However, the new version is interesting not so much for its new functionality as for the change in priorities and organization in next.js. And yes, as you may have guessed from the title, a significant part of this release is valuable for reflecting on previous mistakes.

I’ve been working with next.js since around version 8. All this time I’ve been watching its development with interest (sometimes not without disappointment). Recently, I’ve published a series of articles about struggling with the new App Router — “Next.js App Router. A path to the future or a wrong turn”, “Next.js caching. A gift or a curse”, “More libraries for the god of libraries or how I rethought i18n”. All of these were a result of very weak development of ideas and capabilities in previous versions of next.js. And because of this, my interest in the new version has only grown. Along with that, there’s a desire to understand the vector of changes in the framework.

In this article, I won’t dwell on what App Router or server components are — these are described in detail in previous articles. We’ll focus only on the new version and only on the new changes.

Note: The article reflects the most interesting changes from the author’s perspective. They differ from the official list, as the author selected them from commits and PRs in the framework.


Next.js v15 Release

First, a bit about changes in the internal development processes of next.js. For the first time, the framework team has published a release candidate (RC version). Obviously, they did this due to the React.js team’s decision to publish React v19 RC.

Usually, the next.js team in their stable releases calmly uses react from the “Canary” release branch (this branch is considered stable and recommended for use by frameworks). This time, however, they decided to do things differently (spoiler alert — not in vain).

The plan for both teams was simple — publish a pre-release version, let the community check for issues, and in a couple of weeks publish a full release.


It’s been over six months since the release candidate of React.js was released, but the stable version still hasn’t been published. The delay in releasing the stable version of React.js has impacted next.js’s plans as well. Therefore, contrary to tradition, they published a total of 15 additional patch versions while already working on the 15th version (usually 3–5 patches and then a release). What’s noteworthy here is that these patch versions didn’t include all accumulated changes, but only addressed critical issues, which also deviates from next.js’s usual processes.

The basic release process in next.js is that everything merges into the canary branch, and then, at some point, this branch is published as a stable release. However, as a result of the delay, the next.js team decided to decouple from the React.js release and publish a stable version of the framework before the stable version of React.js is released.


Documentation Versioning

Another very useful organizational change. Finally, it’s possible to view different versions of the documentation. Here’s why this is so important:

Firstly, updating next.js can often be quite a challenging task due to major changes. In fact, this is why there are still over 2 million downloads for version 12 and over 4 million for version 13 monthly (to be fair, version 14 has over 20 million downloads).

Consequently, users of previous versions need documentation specific to their version, as the new one might be half rewritten.


Another problem is that Next.js essentially uses a single channel. Documentation changes are also made to it. Therefore, descriptions of changes from canary versions immediately appeared in the main documentation. Now they are displayed under the “canary” section.


React usage

At the beginning, I mentioned that Next.js is currently using the RC version of React.js. But in reality, this is not entirely true. In fact, Next.js is currently using two React.js configurations: the 19th canary version for App Router and the 18th version for Pages Router.

Interestingly, at one moment they wanted to include the 19th version for Pages Router as well, but then rolled back these changes. Now, full support for React.js version 19 is promised after the release of its stable version.

Along with this, the new version will have several useful improvements for server functions (yes, the React team renamed server actions):


I suppose I’ll include Next.js’s new feature in this section as well — the Form component. Overall, it’s the familiar form from react-dom, but with some improvements. This component is primarily needed if successful form submission involves navigating to another page. For the next page, the loading.tsx and layout.tsx abstractions will be pre-loaded.


import Form from 'next/form'
 
export default function Page() {
  return (
    <Form action="/search">
      {/* On submission, the input value will be appended to 
          the URL, e.g. /search?query=abc */}
      <input name="query" />
      <button type="submit">Submit</button>
    </Form>
  )
}

Developer Experience (DX)

When talking about Next.js, we can’t ignore the developer experience. In addition to the standard “Faster, Higher, Stronger” (which we’ll also discuss, but a bit later), several useful improvements have been released.

Long-awaited support for the latest ESLint. Next.js didn’t support ESLint v9 until now. This is despite the fact that both eslint itself (v8) and some of its subdependencies are already marked as deprecated. This resulted in an unpleasant situation where projects were essentially forced to keep deprecated packages.

The error interface has been slightly improved (which in Next.js is already clear and convenient):


A “Static Indicator” has been added — an element in the corner of the page showing that the page has been built in static mode. Overall, it’s a minor thing, but it’s amusing that they included it in the key changes as something new. The indicator for a “pre-built” page has been around since roughly version 8 (2019) and here, essentially, they’ve just slightly updated it and adapted it for the App Router.


A directory with debugging information has also been added — .next/diagnostics. It will contain information about the build process and all errors that occur. It's not yet clear if this will be useful in daily use, but it will certainly be used when troubleshooting issues with Vercel devrels (yes, they sometimes help to solve problems).


Changes in the Build Process

After discussing DX, it’s worth talking about the build process. And along with it, Turbopack.


Turbopack

And the biggest news in this area. Turbopack is now fully completed for development mode! “100% of existing tests passed without errors with Turbopack”

Now the Turbo team is working on the production version, gradually going through the tests and refining them (currently about 96% complete)


Turbopack also adds new capabilities:


const nextConfig = {
  experimental: {
    turbo: {
      treeShaking: true,
      memoryLimit: 1024 * 1024 * 512 // in bytes / 512MB
    },
  },
}

These and other improvements in Turbopack “reduced memory usage by 25–30%” and also “accelerated the build of heavy pages by 30–50%”.


Other

Significant style issues have been fixed. In version 14, situations often arose where the order of styles was broken during navigation, causing style A to take priority over style B and then vice versa. This changed their priority and, consequently, elements looked different.

The next long-awaited improvement. Now the configuration file can be written in TypeScript — next.config.ts


import type { NextConfig } from 'next';
 
const nextConfig: NextConfig = {
  /* config options here */
};
 
export default nextConfig;

Another interesting update is retrying attempts for static page builds. This means if a page fails at build time (for example, due to internet problems) — it will try to build again.


const nextConfig = {
  experimental: {
    staticGenerationRetryCount: 3,
  },
}

And to conclude this section, a functionality highly desired by the community — the ability to specify the path to additional files for building. With this option, you can, for example, specify that files are located not in the app directory, but in directories like modules/main, modules/invoices.

However, at the moment, they have only added it for internal team purposes, and it definitely won’t be presented in this version. Going forward, it will either be used for Vercel’s needs, or they will test it and present it in the next release.


Changes in the Framework API

The most painful part of Next.js updates — API changes. And in this version, there are also breaking updates.

Several internal framework APIs have become asynchronous — cookies, headers, params and searchParams (so-called Dynamic APIs).


import { cookies } from 'next/headers';
 
export async function AdminPanel() {
  const cookieStore = await cookies();
  const token = cookieStore.get('token');
  // ...
}

It’s a major change, but the Next.js team promises that all this functionality can be updated automatically by calling their codemod:

npx @next/codemod@canary next-async-request-api .

Another change, but probably not relevant to many. The keys geo and ip have been removed from NextRequest (used in middleware and API routes). Essentially, this functionality only worked on Vercel, while elsewhere developers made their own methods. For Vercel, this functionality will be moved to the @vercel/functions package.
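
For those migrating, a hedged sketch of the replacement using @vercel/functions (helper names as documented by Vercel at the time of writing):

import { geolocation, ipAddress } from '@vercel/functions';

export function middleware(request) {
  // Replaces the removed request.geo and request.ip
  const { city, country } = geolocation(request);
  const ip = ipAddress(request);
  // ...
}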

And a few more updates:


const nextConfig = {
  images: {
    localPatterns: [
      {
        pathname: '/assets/images/**',
        search: 'v=1',
      },
    ],
  },
}

Caching

In my personal opinion, this is where the most important changes for Next.js have occurred. And the biggest news is — Caching is now disabled by default! I won’t go into detail about caching problems, as this was largely covered in the article “Next.js Caching. Gift or Curse”. Let’s go through all the main changes in caching:


const nextConfig = {
  experimental: {
    staleTimes: {
      dynamic: 30 // defaults to 0
    },
  },
}

const nextConfig = {
  experimental: {
    serverComponentsHmrCache: false, // defaults to true
  },
}

That’s regarding the “historical misunderstandings”. New APIs will also appear in Next.js. Namely, the so-called Dynamic I/O. It hasn't been written about anywhere yet, so the following will be the author's guesses based on the changes.

Dynamic I/O appears to be an advanced mode of dynamic building. Something like PPR (Partial Prerendering), or more precisely, its complement. In short, Partial Prerendering is a page building mode where most elements are built at build time and cached, while individual elements are built for each request.

So, dynamic I/O [probably] finalizes the architecture for this logic. It expands caching capabilities so that it can be enabled and disabled precisely depending on the mode and place of use (whether in a "dynamic" block or not).


const nextConfig = {
  experimental: {
    dynamicIO: true, // defaults to false
  },
}

Along with this, the "use cache" directive is added. It will be available in nodejs and edge runtimes and, apparently, in all server segments and abstractions. By specifying this directive at the top of a function or a module exporting a function - its result will be cached. The directive will only be available when dynamicIO is enabled.


async function loadAndFormatData(page) {
  "use cache"
  ...
}

Also, specifically for use cache, methods cacheLife and cacheTag are added


export { unstable_cacheLife } from 'next/cache'
export { unstable_cacheTag } from 'next/cache'

async function loadAndFormatData(page) {
  "use cache"
  unstable_cacheLife('frequent');
  // or
  unstable_cacheTag(page, 'pages');
  ...
}

cacheTag will be used for revalidation using revalidateTag, and cacheLife will set the cache lifetime. For the cacheLife value, you'll need to use one of the preset values. Several options will be available out of the box ("seconds", "minutes", "hours", "days", "weeks", "max"), additional ones can be specified in next.config.js:


const nextConfig = {
  experimental: {
    cacheLife: {
      // Example custom profile (the name and values are illustrative)
      biweekly: {
        // How long the client can cache a value without checking with the server.
        stale: 60 * 60 * 24,
        // How frequently you want the cache to refresh on the server.
        // Stale values may be served while revalidating.
        revalidate: 60 * 60 * 24,
        // In the worst case scenario, where you haven't had traffic in a while,
        // how stale can a value be until you prefer deopting to dynamic.
        // Must be longer than revalidate.
        expire: 60 * 60 * 24 * 14,
      },
    },
  },
}
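
Revalidation by tag then goes through the familiar revalidateTag from next/cache; a minimal sketch (the tag value is whatever was passed to cacheTag):

import { revalidateTag } from 'next/cache';

// e.g. inside a server action or route handler after updating data
export async function onPageUpdated(page) {
  // Invalidates everything cached under this tag via cacheTag
  revalidateTag(page);
}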

Partial Prerendering (PPR)

Probably the main feature of the next release. As mentioned earlier, PPR is a page building mode where most elements are assembled at build time and cached, while individual elements are assembled for each request. At the same time, the pre-built part is immediately sent to the client, while the rest is loaded dynamically.


The functionality itself was introduced six months ago in the release candidate as an experimental API. This API will remain in this state, and we will likely see it as stable only in version 16 (which is good, as major functionality often transitioned to stable within six months to a year).

Regarding the changes. As mentioned earlier, it primarily updated the working principles. However, from the perspective of using PPR, this hardly affected anything. At the same time, it received several improvements:

Previously, there was just a flag in the config, but now to enable PPR, you need to specify ‘incremental’. This is apparently done to make the logic more transparent — content can be cached by developers even in PPR, and to update it, you need to call revalidate methods.


const nextConfig = {
  experimental: {
    ppr: 'incremental',
  },
}

Also, previously PPR was launched for the entire project, but now it needs to be enabled for each segment (layout or page):


export const experimental_ppr = true

Another change is Partial Fallback Prerendering (PFPR). It’s precisely due to this improvement that the pre-built part is immediately sent to the client, while the rest is loaded dynamically. In place of dynamic elements, a fallback component is shown during this time.


import { Suspense } from "react"
import { StaticComponent, DynamicComponent } from "@/app/ui"
 
export const experimental_ppr = true
 
export default function Page() {
  return (
    <>
      <StaticComponent />
      <Suspense fallback={...}>
        <DynamicComponent />
      </Suspense>
    </>
  );
}

Instrumentation

Instrumentation is marked as a stable API. The instrumentation file allows users to hook into the lifecycle of the Next.js server. It works across the entire application (including all segments of Pages Router and App Router).

Currently, instrumentation supports the following hooks:

register - called once when initializing the Next.js server. It can be used for integration with observability libraries (OpenTelemetry, datadog) or for project-specific tasks.

onRequestError - a new hook that is called for all server errors. It can be used for integrations with error tracking libraries (Sentry).


export async function onRequestError(err, request, context) {
  await fetch('https://...', {
    method: 'POST',
    body: JSON.stringify({ message: err.message, request, context }),
    headers: { 'Content-Type': 'application/json' },
  });
}
 
export async function register() {
  // init your favorite observability provider SDK
}

Interceptor

Interceptor, also known as route-level middleware. It’s something like a full-fledged [already existing] middleware, but unlike the latter:


Moreover, when creating an interceptor file, all pages below in the tree become dynamic.


import { auth } from '@/auth';
import { redirect } from 'next/navigation';
import type { NextRequest } from 'next/server';

const signInPathname = '/dashboard/sign-in';

export default async function intercept(request: NextRequest): Promise<void> {
  // This will also seed React's cache, so that the session is already
  // available when the `auth` function is called in server components.
  const session = await auth();
  if (!session && request.nextUrl.pathname !== signInPathname) {
    redirect(signInPathname);
  }
}

// lib/auth.ts
import { cache } from 'react';
export const auth = cache(async () => {
  // read session cookie from `cookies()`
  // use session cookie to read user from database
})

On Vercel, middleware will now be effective as a fast preliminary check at the CDN level (thus, for example, immediately returning redirects if the request is not allowed), while interceptors will work on the server, performing full-fledged checks and complex operations.

In self-hosting, however, such a division will apparently be less effective (since both abstractions work on the server). It may be sufficient to use only interceptors.


Conclusions

Overwriting fetch, aggressive caching, numerous bugs, and ignoring community requests. The Next.js team made erroneous decisions, rushed releases, and held onto their views despite community feedback. It took almost a year to recognize the problems. And only now, finally, there’s a sense that the framework is once again addressing community issues.

On the other hand, there are other frameworks. A year ago, at the React.js presentation, it seemed that all frameworks would soon be on par with Next.js. React started mentioning Next.js less frequently as the main tool, frameworks were showcasing upcoming build systems, support for server components and functions, and a series of global changes and integrations. Time has passed, and essentially, none of them have reached that point yet.

Of course, final conclusions can only be drawn after some time, but for now, it feels like the changes in React.js, instead of the expected leveling of frameworks, have led to even greater dominance of Next.js and a wider divergence between frameworks (since the implementation of server components and actions was left to the discretion of the frameworks).

At the same time, OpenAI switched to Remix (“due to its greater stability and convenience”):


And apparently they started before significant changes in Next.js


In general, in the next stateofjs and stackoverflow surveys, we are likely to see significant reshuffling.

Credits
Code examples or their foundations are taken from next.js documentation, as well as from commits, PRs, and the next.js core;

Postscript
If you need a tool for generating documentation based on MD files — take a look at robindoc.com, if you work with next.js — you might find something useful in the solutions at nimpl.tech.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreiakwqefehgxoi6jofgwwqqbqdweuda3tsyt4vklrcx57bgtcp2g64@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Website Performance. Big Basic Checklist]]></title><link>https://alexdln.com/blog/website-performance-basic-checklist</link><guid isPermaLink="true">https://alexdln.com/blog/website-performance-basic-checklist</guid><pubDate>Fri, 13 Sep 2024 08:13:00 GMT</pubDate><description><![CDATA[A fast website is something very obvious and simple: the site loads quickly and does not freeze. “If you make people wait 3 seconds, you start losing users” is a rule that probably every web developer has heard. But this rule is only the tip of the iceberg, both when it comes to the reasons customers are lost and when it comes to real outcomes.

This article is a comprehensive collection of information about performance: from the history of the first analysis tools and the reasons they appeared, to modern problems and universal ways to improve a website.]]></description><content:encoded><![CDATA[A fast website is something very obvious and simple: the site loads quickly and does not freeze. “If you make people wait 3 seconds, you start losing users” is a rule that probably every web developer has heard. But this rule is only the tip of the iceberg, both when it comes to the reasons customers are lost and when it comes to real outcomes.

This article is a comprehensive collection of information about performance: from the history of the first analysis tools and the reasons they appeared, to modern problems and universal ways to improve a website.


Performance factors

First, what does a fast website mean? Site speed is a combination of factors. Some of them are easy to measure, including in a “lab” environment, while others work only at the level of the user’s perception.

And perhaps the three‑second rule today can no longer be called the main and only correct one. Most users now have fast enough internet to load even a very heavy page in a couple of seconds. Further optimizations of load speed are done mostly for the remaining 20% of customers. But there are factors that affect 100% of customers.

These are factors of how the site is perceived and how it feels to interact with it. And if you try to identify the main principle of a fast website today, it is this: “the user should not feel the site working.” To achieve that, you need to consider not only site speed metrics, but also the user experience.

User experience

You have probably run into situations where you search for an answer, open a site, start reading, and ads keep loading nonstop on the sides. Or you open a landing page for a tool to see how it is used, and it stutters because of animations. Or you click a button and nothing happens.

These are very common cases that almost every third site is guilty of. But there are less obvious cases too.

Site feedback

Imagine two blogs. In both, you go to the second page of an article list. In the first case, nothing happens. One second, two seconds, you click again. Another second, and here is the long‑awaited navigation to the new page. Next time you do this, you will very likely click again after just a second. After that, repeated clicks will happen immediately. The reason is simple: the interface gives no feedback that the action was successful. A kind of “CTRL+C effect.”

In the second blog, right after the navigation it shows skeleton placeholders for the articles. That same one or two seconds of waiting feels different. The user understands that loading is in progress and the articles are about to appear.

But what happens with the same scenario on a 5G connection?

The first site will load and show new articles after about 200 ms of a blank screen. The second site will immediately show a skeleton, and after 200 ms it will replace the skeletons with the actual articles. During those 200 ms the interface changes several times, with jumps and flicker. So an overly fast response significantly worsened the user experience. This happens with all metrics: improving one can lead to worsening another. Every optimization is analysis, checks, tests, and searching for the best solution for a specific case.

In this case, the optimal solution is to show a loader or skeletons with a small delay, usually around 150 to 300 ms. After a second, you can show an additional message like “This page is taking longer than usual, please wait.” This kind of solution is used, for example, in Jira and Notion. Usually users do not notice these tricks (and that is exactly the key factor of a fast site, as mentioned before).
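
For illustration, here is a minimal sketch of such a delay as a React hook (the hook name and the 200 ms default are illustrative, not from any particular library):

import { useEffect, useState } from 'react';

// Returns true only if loading has lasted longer than `delayMs`,
// so skeletons never flash on fast responses.
export function useDelayedVisible(isLoading: boolean, delayMs = 200): boolean {
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    if (!isLoading) {
      setVisible(false);
      return;
    }
    const timer = setTimeout(() => setVisible(true), delayMs);
    return () => clearTimeout(timer);
  }, [isLoading, delayMs]);

  return visible;
}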

Another possible solution is a progress bar. In this case a line appears right away showing the loading percentage. The percentage can be symbolic, moving in steps (started the action: 10%, sent the request: 20%, received a response: 90%). You can see this approach, for example, in GitHub.
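
A rough sketch of such symbolic progress (the selector, percentages, and renderPage step are illustrative):

// Jump the bar to fixed percentages as each phase completes.
declare function renderPage(html: string): void; // hypothetical render step

const bar = document.querySelector<HTMLElement>('.progress-bar');
const setProgress = (pct: number) => {
  if (bar) bar.style.width = `${pct}%`;
};

async function goToPage(url: string) {
  setProgress(10);                 // started the action
  const promise = fetch(url);
  setProgress(20);                 // sent the request
  const response = await promise;
  setProgress(90);                 // received a response
  renderPage(await response.text());
  setProgress(100);
}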

Showing content

The other side of the same situation is that the user should see something as early as possible. This applies both to the initial page load and to everything that happens after.

For example, if a page has a popup with cards for related tasks, you should not wait until all tasks are loaded and only then open the popup. The user may not even need that list. It is much better to show the popup right away with the data you already have and indicate that the tasks are still loading (using the approaches mentioned above).

Some services show a full-screen loader during the basic page load. LinkedIn does this while the main blocks load, then it shows the page and loads a few more blocks. This can be good for perceived experience, but it can hurt metrics because it causes UI transitions, extra logic, and delays. For example, the LCP metric, which will be discussed later, will be counted when the largest element is loaded, which in this case will happen only after the loader is hidden.

Animations

Animations sit on a very fine line when it comes to user experience. They are a powerful tool for showing beauty, modernity, and a sense of momentum. And it is true that a site without well-crafted states, effects, and animations often cannot compete.

With states and micro-effects, things are simple: a short, smooth response to every interactive user action. Full animations are more complex. An overabundance often leads to the page freezing or stuttering, especially on a user’s first visit (and since animations are used mostly on marketing landing pages, that is a large part of the audience). You need to find a balance in both quantity and complexity. For especially complex animations, a much more efficient solution is often to use video.

There are also cases of deliberately worsening the user experience. For example, full-screen transitions on scroll. Instead of quickly scrolling to the relevant part (pricing, description, the form itself, and so on), the user spins the wheel for 10 seconds and watches animations they do not need.

A similar situation happens with videos or animations that autoplay as you scroll. Users often do not need those videos. Each scroll step should reveal useful information that stays anchored in place.

Image display

Another basic and controversial example is progressive image loading, where an image is first shown in very low quality with blur while the full version loads in the background. This is how, for example, Cloudflare Mirage or next/image with a blur placeholder works. The approach is controversial for a few reasons.

With the right optimizations, it can improve user experience, because the user sees thumbnails quickly. But, all else being equal, it will likely worsen speed metrics, because it clogs the pipeline with extra requests (or increases page weight if the previews are inlined) and runs inside client logic.

Another common mistake is setting loading="lazy" on a large image in the first section. It will load after all scripts, sprites, CSS files, other non-lazy images, and so on. But the user should see that image immediately, which means it should be much higher in the queue.

I intentionally do not mention image preloading (meta preload) here. It is usually better to load images via picture with multiple formats. With preload, you either have to pick a preferred format or preload all possible variants (which also harms other metrics).

Late image loading affects metrics, especially LCP. Incorrectly configured images can also worsen CLS. That brings us to the main set of metrics: Web Vitals.

Web Vitals

Web Vitals is a set of metrics that Google considers key to user comfort. Currently, they include Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), Interaction to Next Paint (INP), First Contentful Paint (FCP), Speed Index, and Total Blocking Time (TBT).

All of these metrics have different weights. You can see the weight and the potential benefit of improving each one in a special calculator.

The list changes constantly. For example, in 2017 there was First Meaningful Paint instead of FCP, Perceptual Speed Index instead of Speed Index, and Estimated Input Latency (which was later replaced by First Input Delay and is now being replaced by INP).

Web Vitals metrics are part of the Lighthouse analyzer. Lighthouse, in turn, is part of the PageSpeed Insights service, which is part of the Google PageSpeed toolkit. The latter was introduced by Google back in 2010.

Today this is the main set of metrics for analyzing site performance in Google and on the web overall. But in the first years after release, they were not very popular. Everything changed when Google updated its search ranking policy.

Why Web Vitals appeared

If you find a first-generation website today, its contents will usually arrive extremely quickly, even though those sites had no optimizations (not even response compression). For example, netside.net has a total resource weight of 117 KB (114 KB transferred). For comparison, the average Wikipedia page is about 750 KB (250 KB transferred), and react.dev/reference/react is about 2.1 MB (915 KB transferred).

At the same time, both Wikipedia and the React documentation load in roughly the same time as netside on fast connections, but on slow connections they take 2 to 3 times longer for the full content to appear (8 seconds versus 3 seconds).

The internet got much faster, and as a result people largely stopped paying attention to page weight. Today, an apparently simple text page can load tens of megabytes of data. Services that are identical in complexity, with medium-sized HTML, simple styling, and basic logic, can still differ dramatically in size and metrics.

While some teams built huge products and optimized them to load in a couple of seconds, others simply scaled sites to tens or hundreds of megabytes. jQuery, Angular, and later Vue and React accelerated this trend to extreme levels.

As a result, users increasingly landed on sites that took longer than 3 seconds to load. Without waiting for the content, they went back to search and tried other sites. At some point Google noticed this: after being disappointed a few times, users might close the search engine or switch to another one (where results were ranked differently).

In 2020, the Web Vitals metric set was fully introduced, Lighthouse was rebranded, and a new era of optimization began. Metrics returned to the web with renewed force.

Impact of metrics on business

In 2020, not only the metrics themselves were introduced, but also updated tools for analyzing and optimizing them. Lighthouse did not just show a page’s results. It provided clear, concrete metrics, instructions, and documentation for possible optimizations.

Today, metrics play an important role in search results. Slow sites risk more than just the three-second rule. They can also lose organic traffic as they drop in rankings. But why?

Google, of course, does not disclose all ranking rules, and it often says it does not know them in full. Still, it is known that a position in search results is the sum of many factors. Some factors are higher priority than others. Content quality, relevance to the query, and a site’s overall “weight” are among the most important. But Web Vitals metrics are also one of the ranking factors.

At the same time, an even bigger factor is how real users behave. That is why, in Google, many metrics are assessed using real user data rather than purely in a lab. If people visit a site and consistently stay on it (instead of immediately returning to search), that is a good signal. So if a site is slow, it not only loses points on metrics. It also risks losing overall effectiveness. Some sites can have very poor performance metrics while still being highly effective. Such sites can rank above sites with good metrics but worse user effectiveness, because this factor matters more.

Overall, this started an era of chasing metrics and endless attempts to trick Google. For example, for a while it was popular to show crawlers a screenshot of the first screen. Agencies often used this approach. The image loaded much faster than all of the site’s resources, so the metrics were immediately counted.

It is possible that approaches like this are exactly why performance is now analyzed using real user data. Today, if Google sees suspicious differences between what bots and real users receive, it can react harshly. This often means removing pages from search results and blocking them in Google’s services.

Of course, slow sites also affect the business itself. Tasks take longer to complete, navigation is slower, and various unpleasant effects appear. Overall, it is simply less comfortable for people to use such a site.

A few more resources on how optimizing metrics can help a business:

How The Economic Times met Core Web Vitals thresholds and reduced overall bounce rate by 43%

How redBus improved Interaction to Next Paint (INP) and increased sales by 7%

How Renault reduced bounce rate and improved conversions by measuring and optimizing Largest Contentful Paint

BBC found that they lose an additional 10% of users for every extra second a website takes to load

Vodafone: improving LCP by 31% increased sales by 8%

Rakuten 24: investing in Core Web Vitals increased revenue per visitor by 53.37% and conversion rate by 33.13%

Metrics for dynamic sites

Over time, a number of single-page application (SPA) libraries and frameworks started being used to build marketing websites. As a result, a large share of content ended up outside the reach of search engines. Google reacted again: Googlebot began not only crawling the initial HTML but also rendering dynamic content. This meant apps could fully compete for rankings.

This meant the question of metrics reached these tools as well. Frameworks started competing not only on simplicity and features, but also on speed. They did not do a great job of it: splitting logic, lazy loading, style optimization, and deferred rendering can, in the case of marketing landing pages, only get an SPA closer to classic static sites.

An SPA can apply many internal optimizations only after all client logic has loaded. That makes sense for web applications, because it brings a large benefit for long browsing sessions and repeat visits. But for landing pages, optimizations for a user’s first visit are often much more important.

Today, React Server Components are changing this. Combined with many built-in optimizations in React and Next.js, they make it possible to get comparable metrics out of the box and provide a lot of potential for marketing use cases. But that is a topic for separate articles.

Data collection and analysis

Overall, metrics are important, metrics are useful, and metrics need to be improved. But before improving them, you need to collect data about them.

Lighthouse

The first tool on this list is the already mentioned Lighthouse.

Lighthouse is the main tool for working with Web Vitals in lab conditions. It lets you collect the metrics themselves, look at additional factors (such as accessibility or best practices), find weak spots, and review recommendations for improvements.

You can test a site in Lighthouse both locally and via the PageSpeed Insights service, which runs it on a reference device.

If you need a quick check for the main issues, it is enough to run the local version (Developer Tools → Lighthouse → Analyze Page Load). If you need a full analysis, trend tracking, and more stable results, use PageSpeed Insights. The service shows all the needed information, but specifically for the reference device. Also, if the site is popular enough, the service shows averaged real-user metrics.

Developer tools

Besides Lighthouse, DevTools has a number of other useful tabs, but the most helpful for optimization are probably Performance and Network.

Performance

Performance lets you record a page profile, including resource loading, script execution, and rendering.

In this tab you can immediately see FCP and LCP. You can check request queues, analyze long scripts and animations, see delays, and find overloaded moments.

Network

In the Network tab you can see all network requests, including loading resources (images, scripts, styles) and REST requests. This helps identify which resources slow down loading.

Here you can see the load time of each resource (including request latency), the request waterfall, queues, drops, request order, size, and caching.

Search Console

You can get a bit of additional information about real user metrics from Search Console.

Google Analytics

This may be an unexpected service on the list, but it is one of the most effective for analyzing real user results (Real User Monitoring, or RUM). GA can collect any data by simply enabling the needed options in settings, sending additional metrics manually, or connecting a ready-made preset in GTM (Google Tag Manager). For Web Vitals there is also a ready-made preset. Once connected, GA will start receiving all the metrics data you need.
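
If you prefer sending metrics manually, a common pattern uses the open-source web-vitals package; a sketch (the event field names are a matter of convention, not a GA requirement):

import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

declare const gtag: (...args: unknown[]) => void; // provided by the GA snippet

function sendToGA({ name, delta, id }: Metric) {
  gtag('event', name, {
    // CLS is a small fraction, so scale it up to a useful integer
    value: Math.round(name === 'CLS' ? delta * 1000 : delta),
    metric_id: id, // lets you sum deltas per page view in reports
    non_interaction: true, // should not affect bounce rate
  });
}

onCLS(sendToGA);
onINP(sendToGA);
onLCP(sendToGA);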

All that is left is to build charts and start tracking trends.

Third-party services

DebugBear

DebugBear lets you automatically collect and analyze metrics through several tools, including Lighthouse, run multi-run analysis (to get more accurate metrics), and track and compare metrics and requests over long periods of time.

SpeedCurve

SpeedCurve lets you analyze real user metrics, test from different browsers and locations, check performance in CI/CD, analyze INP, and more.

GTmetrix

GTmetrix lets you analyze under different conditions, replay video recordings, test different scenarios, and analyze request waterfalls.

Site24x7

Site24x7 lets you monitor all server components, from site load speed to network and Kubernetes monitoring, from more than 130 locations worldwide.

Pingdom

Pingdom lets you monitor a service, including availability and response time analysis. You can configure outage alerts and analyze from many locations. It can also monitor specific user scenarios.

Uptrends

Uptrends offers availability, performance, and infrastructure monitoring. It includes analysis from different locations, detailed reports, scenario testing, and API monitoring.

WebPageTest

WebPageTest lets you analyze a site and then validate optimizations using dynamic tests. The service can also be used to pre-check planned changes (for example, adding third-party analytics).

Metrics optimization

So, we have learned what metrics exist and how to analyze them. That means it is time to move on to optimizing them.

Image optimization

This is the improvement already mentioned in the context of user experience. Images often take up a significant share, not only of the page content, but also of its weight, meaning transferred traffic. That means it is worth reducing their weight. There are several approaches.

Image compression

The first and most obvious solution is to compress the image. There are many services and tools that can reduce image size, sometimes significantly, without a visible loss of quality.

Overall, there are many solutions — from build-time tools to image CDNs — and a detailed comparison deserves a separate article.

Resizing images

If an image is used in a 400x200 block, it does not need to be 1920x1080. No screen will show that level of detail in such a small block. It is better to use an image that is no more than 2x the display size. This margin is useful, for example, for retina displays.

- <img style="width:200px" src="/example_1920x1080.png" />
+ <img style="width:200px" src="/example_400x225.png" />


Lazy loading

The idea of lazy loading is to defer downloading files until they are needed. For images, that is until they are visible.

First, it is worth using the native browser attribute loading="lazy". It gives the desired result on fast connections. On slow connections, all images can still be queued immediately, so that when scrolling the page the user does not encounter blank spaces. But those images go to the bottom of the pipeline, letting the user load the most important files first, and only then load images.

<img style="width:200px" src="/example_400x225.png" loading="lazy" />


Importantly, this attribute should not be used for critical images, such as a large image in the first section — the likely LCP element.

Using modern formats

Among modern formats, webp and avif are worth mentioning. Both are well supported today — likely in more browsers than your site officially targets. At the same time, it is hard to say which format is optimal: the compression ratio and quality trade-offs differ for each image. For illustrations, png can often weigh less than a modern webp; for text-heavy images, avif works well; for raster images with transparency, webp is often a good fit.

On average, choosing the right format can reduce image weight by more than 50%.

Serving different images

If someone visits from a mobile device, they do not need images intended for desktop, neither in size nor in quality. On mobile devices there is much less space for an image and often lower display capabilities.

The most convenient way to serve different images depending on conditions is to use the <picture> tag. It lets you specify which image to show for which screen size and type.

<picture>
  <source srcset="/example_800x450.webp" media="(min-width: 1080px)" type="image/webp" />
  <source srcset="/example_800x450.png" media="(min-width: 1080px)" />
  <source srcset="/example_400x225.webp" type="image/webp" />
  <img src="/example_400x225.png" loading="lazy" />
</picture>


A nice bonus is that picture can include multiple formats. For example, webp as the primary one and png as a fallback for the rare case when webp is not supported or is disabled for some reason.

Serving different images in CSS

For CSS background images, handling screen size is simple: define media queries and set background-image to the right variant. But format selection is harder. For these cases, you can add a blocking script at the start of the page that adds the required classes to the whole document, such as .img-webp and .img-2x, and then set images in CSS based on those classes.

.example-block {
  background-image: url("/example_400x225.png");
}

.img-webp .example-block {
  background-image: url("/example_400x225.webp");
}
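
A minimal sketch of such a detection script, meant to be inlined at the top of the page (the probe image is a commonly used 1x1 lossless WebP; the .img-webp and .img-2x class names match the idea above):

// Probe WebP support with a tiny image, then tag <html> for the CSS above.
const probe = new Image();
probe.onload = () => {
  if (probe.width > 0) document.documentElement.classList.add('img-webp');
};
probe.src =
  'data:image/webp;base64,UklGRhoAAABXRUJQVlA4TA0AAAAvAAAAEAcQERGIiP4HAA==';

// Retina displays can be tagged synchronously.
if (window.devicePixelRatio >= 2) {
  document.documentElement.classList.add('img-2x');
}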


Sprites

For large images, the main goal is to reduce weight. For small images, the main goal is to reduce the number of requests. Small images and icons often weigh so little that transfer time is not the main issue. Instead, much of the time goes into connection start and subsequent delays. At the same time, a page often contains dozens or hundreds of icons. All of them can overload the pipeline and prevent important information from loading.

The best solution in this case is to combine all icons into a single file, a sprite. This can be an SVG or a PNG.

With SVG, an svg sprite is created as a single svg file, where each icon is defined as a separate <symbol /> tag.

<svg xmlns="http://www.w3.org/2000/svg">
  <symbol viewBox="0 0 24 24" id="check">
    <path d="M20 6L9 17L4 12" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
  </symbol>
  <symbol viewBox="0 0 24 24" id="close">
    <path d="M17 7L7 17M7 7L17 17" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
  </symbol>
</svg>


Then it is used via the use tag.

<div>
  <svg width="20" height="20">
    <use href="/sprite.svg#check"></use>
  </svg>
</div>


Critical icons, if any, can be split into an additional sprite and inlined into the HTML.

With PNG, or another raster format, you create classes that crop a specific icon out of a larger image, a CSS sprite. This approach is often used by maps.

.icon {
  background: url("/sprite.png");
  height: 20px;
  width: 20px;
}
.icon-check {
  background-position: -20px 0;
}
.icon-close {
  background-position: -40px 0;
}


Inlining images

Another possible optimization for small images is to inline them. This is useful when small images are used once and only on a single page. If the image repeats, it is better to include it in a sprite. This is usually implemented via additional loaders and special file naming.
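
For example, with webpack 5 this can be a size-based rule (a sketch; the 4 KB threshold is illustrative):

// webpack.config.js
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|svg)$/i,
        type: 'asset', // inlines as a data URL or emits a file, based on size
        parser: {
          dataUrlCondition: { maxSize: 4 * 1024 }, // inline anything under 4 KB
        },
      },
    ],
  },
};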

Video optimization

The main improvement for video is deferred loading. But unlike images, it often follows a more progressive path: first a lightweight poster image, then a short preview, and only on interaction the full video or player.

In some sections, YouTube shows GIF previews on hover, and on click navigates to the full player.

Static compression

A key optimization is configuring compression. Compression is especially useful for JS, HTML, and CSS files. It can reduce transfer size by around 50% to 70%, more or less depending on compression level, file size, and content.

You can compress files during the build, generating brotli or gzip versions. It is better to prefer brotli, because it is about 20% more effective than gzip.
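
A minimal Node sketch of such a build step (the file paths are illustrative):

import { brotliCompressSync, gzipSync, constants } from 'node:zlib';
import { readFileSync, writeFileSync } from 'node:fs';

// This runs once at build time, so maximum compression levels are fine.
for (const file of ['dist/app.js', 'dist/styles.css']) {
  const source = readFileSync(file);
  writeFileSync(`${file}.br`, brotliCompressSync(source, {
    params: { [constants.BROTLI_PARAM_QUALITY]: constants.BROTLI_MAX_QUALITY },
  }));
  writeFileSync(`${file}.gz`, gzipSync(source, { level: constants.Z_BEST_COMPRESSION }));
}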

More often, compression is delegated to a CDN or the server, in many cases nginx. But with incorrect configuration, such as missing caching, this can increase server response time.

Style optimization

How styles are delivered

There are three ways to deliver styles to a page: inline (the style attribute on elements), embedded (a <style> tag in the document), and external (a separate CSS file connected via <link>).

Inline inflates the document, which is critical, so in most cases it is not suitable as the main approach. External allows caching, but if every page has unique styles and visitors come only once, that benefit disappears. Embedded is most effective when each page has unique styles and you use streaming.

Optimizing style application

When rendering a page, changing DOM elements, or changing an element state, such as hover or focus, the browser recalculates styles. In some situations, this can cause visible slowdowns.

As basic optimizations, remove unused styles and move rarely used media into separate bundles with conditions.

<link
  rel="stylesheet"
  href="mobile.css"
  media="screen and (max-width: 600px)"
/>


Optimizing selectors

This is a less obvious optimization. Evaluating some selectors can be an expensive operation. For example, if you have a selector like body.test-a .block[data-test=a], when the body class changes the browser needs to match all .block elements on the page.

More about optimizing and debugging selectors can be found here: https://developer.chrome.com/docs/devtools/performance/selector-stats?hl=en#analyze-stats.

Deferred rendering

In addition to lazy loading sections or deferring rendering via JavaScript, you can use a standard CSS mechanism: the content-visibility property lets you tell the browser whether an element needs to be rendered right away.

By default, all elements are rendered immediately and are fully available. But if you override this property, the browser can optimize rendering. With auto, the element will be prepared and rendered as needed. With hidden, it will remain hidden until you change the value.

.map {
  content-visibility: auto;
  contain-intrinsic-size: 1000px;
}


The contain-intrinsic-size property lets you reserve space for the element while it is not rendered. This is similar to setting height for an img with loading="lazy".

It is important to note that if you set hidden, the element will not be available to bots or for in-page search.

Font optimization

Fonts are often a weak spot for a website because they are both critical and fairly heavy resources.

Optimizing the glyph set

First, review the fonts used on the page: are all font families, weights, and variants really needed? After keeping only what is necessary, or switching to variable fonts, you can go further and subset the font itself, serving only the character ranges actually used via unicode-range:

@font-face {
  font-family: "Inter";
  src: url("Inter-Regular-webfont.woff2") format("woff2");
  unicode-range: U+0025-00FF;
}


Loading strategy

There are two main approaches to font loading: show the text immediately in a fallback font and swap it once the webfont arrives, or hide the text briefly until the webfont loads.

Which approach to choose depends on the situation, since both have pros and cons.
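
Both strategies are usually expressed through the font-display descriptor. A minimal example:

@font-face {
  font-family: "Inter";
  src: url("Inter-Regular-webfont.woff2") format("woff2");
  /* swap: show fallback text immediately, replace it when the font loads */
  /* block (or optional) would instead hide the text briefly */
  font-display: swap;
}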

In some cases, teams also use background font loading and apply those fonts only on subsequent visits.

Preloading

Another possible improvement is preloading fonts. This moves the font earlier in the loading pipeline than other resources.

<link rel="preload" href="/Inter-regular.woff2" as="font" type="font/woff2" crossorigin>


This step, like many others, can have negative side effects. For example, secondary fonts can take up the network pipeline, while the hero image from the first section ends up at the bottom of the initial request queue (so it is important to limit the number of fonts and preload only critical ones).

Faster server response

Because resources load like a train, one after another, server latency has a compounding effect. For example, if server latency is 60 ms and the hero image from the first section is loaded via JavaScript, it will appear 120 ms later than it would with 20 ms latency (an extra 40 ms for the HTML, another 40 ms for the script, and another 40 ms for the image).

At the same time, latency stretches the entire pipeline and, as mentioned earlier, this is especially important for small files.

To reduce latency, the following can help: serving through a CDN, hosting closer to your audience, modern protocols (HTTP/2, HTTP/3), and connection reuse (keep-alive).

Caching

The best way to optimize requests is to avoid requests. One way to do that is caching.

When all parts of the page are already downloaded and stored, the browser only needs to render the page (which takes relatively little time). Caching can dramatically speed up repeat page loads and navigation to subsequent pages (since some resources are shared).

Caching usually relies on:

The Cache-Control header. It lets you set how long the browser may cache a response. It should only be used together with a hash in the filename; otherwise, the user may be unable to get updated resources (which can lead to unexpected artifacts).
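
For example (the file names are illustrative):

/app.3f9c1b2.js  →  Cache-Control: public, max-age=31536000, immutable
/index.html      →  Cache-Control: no-cache

The hashed file can safely be cached “forever”, since a new release produces a new name, while the HTML itself is revalidated on each request.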

The Etag and Last-Modified headers. They let browsers detect whether content has changed and only download the file if it has. Otherwise, the browser uses the cached version. This approach requires an additional request to the server, which adds latency. For that reason, Cache-Control should be the default choice, and Etag or Last-Modified should be used for special cases.

Other

Reducing layout shifts

Another important improvement that affects how fast a site feels is reducing layout shifts (CLS). They happen, for example, when a user loads a page (and may have even started using it), and then additional elements load and shift the entire UI. These can be banners, images, dynamic widgets, and so on.

For most cases, the main solution is to reserve the most accurate amount of space possible for the block. For example, for a banner above the header you can fix a minimum height of 60 px on desktop and 120 px on mobile. For images, use the built-in mechanisms: explicit width and height attributes, or the aspect-ratio CSS property.
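
A minimal example — with explicit dimensions, the browser reserves a 400×225 box before the image loads:

<img src="/example_400x225.png" width="400" height="225" loading="lazy" />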

Avoiding render-blocking elements

Some elements can completely stop rendering and further processing of the page. These are called render-blocking: until they load and execute, the browser delays loading other required resources and showing the page content.

These elements commonly include synchronous <script> tags without defer or async, stylesheets in the <head>, and @import rules inside CSS.
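
A couple of common fixes (the media-switch trick for non-critical CSS is a widespread pattern, not the only option):

<!-- downloads in parallel and runs only after parsing -->
<script src="/analytics.js" defer></script>

<!-- non-critical styles load without blocking the first render -->
<link rel="stylesheet" href="/below-the-fold.css" media="print" onload="this.media='all'" />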

Postscript

Beyond the list above, there are dozens or hundreds of other optimizations. Every framework, every library, and every integration has its own issues and its own impact on metrics.

As mentioned earlier, every optimization is analysis, checks, tests, and searching for the best solution for a specific case. Validate ideas and continuously track trends — the best thing you can do is catch regressions in time.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreidbv74zo2mpi46r3zo5kp42lgsqfyk6dbi4zjzbebcbb2jgohwjzm@png" type="image/jpeg" /></enclosure></item>
		<item><title><![CDATA[Vercel Edge — what is it and how is it]]></title><link>https://alexdln.com/blog/vercel-edge</link><guid isPermaLink="true">https://alexdln.com/blog/vercel-edge</guid><pubDate>Sun, 14 Jul 2024 19:26:00 GMT</pubDate><description><![CDATA[Edge runtime — one of the main features of Vercel, the company that develops and maintains next.js. However, its influence has spread far beyond Vercel's own frameworks and utilities: the edge runtime works in Svelte (recently brought under Vercel's wing), in nuxt, and in more than 30 other frontend frameworks. This article will focus on the edge runtime — what it is, how it is used in Vercel, what features it adds to next.js, what changes to expect, and what solutions I built to extend these features.]]></description><content:encoded><![CDATA[Edge runtime — one of the main features of Vercel, the company that develops and maintains next.js. However, its influence has spread far beyond Vercel's own frameworks and utilities: the edge runtime works in Svelte (recently brought under Vercel's wing), in nuxt, and in more than 30 other frontend frameworks. This article will focus on the edge runtime — what it is, how it is used in Vercel, what features it adds to next.js, what changes to expect, and what solutions I built to extend these features.

Vercel Edge Network

Simply put, the Edge Network is a content delivery network (CDN, a distributed infrastructure): multiple points around the world. The user interacts not with a single server (which may be located in the company’s office on the other side of the world), but with the nearest network point.

At the same time, these points are not copies of the application but separate pieces of functionality that run between the client and the server. In other words, they are mini-servers with their own features (described later).

With this system, a request does not go straight to your distant server but to a nearby point. Decisions on A/B tests are made there, authorization checks are performed, requests are cached, errors are returned, and much more. After that, if necessary, the request goes on to the server for the required information. Otherwise, the user receives an error or, for example, a redirect in the shortest possible time.

Of course, this concept itself is not Vercel’s invention. Cloudflare, Google Cloud CDN, and many other solutions can do this as well. However, Vercel, with its influence on frameworks, has taken it to a new level, deploying not just an intermediate router at the CDN level but mini-applications capable of even rendering pages at the point nearest to the user. And most importantly, this can be done simply by adding familiar JS files to the project.

Edge runtime in next.js

In next.js, perhaps the main entry point into this environment is the middleware file. Any segment (an API route or a page) can also be executed in the edge runtime. But before describing them, a little about the next.js server.

Next.js is a full-stack framework: it contains both the client application and the server. When you run next.js (next start), it is the server that starts, and it is responsible for serving pages, working with the API, caching, rewrites, and so on.

It all works in the following order:

1. headers (from next.config.js)
2. redirects (from next.config.js)
3. Middleware
4. beforeFiles rewrites (from next.config.js)
5. Filesystem routes (public/, _next/static/, pages/, app/)
6. afterFiles rewrites (from next.config.js)
7. Dynamic routes
8. fallback rewrites (from next.config.js)

Once it is determined that the request has reached a segment (and not, for example, a redirect), its processing begins: either a statically built segment is returned, the result is read from the cache, or the segment is executed and its result returned.

On Vercel, this entire cycle can likely run in the edge runtime. Points 3, 5, and 7 are particularly interesting here.

The middleware in its basic implementation looks like this:

import { NextResponse, type NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  return NextResponse.redirect(new URL('/home', request.url));
}

In it, for example, you can check authorization, perform redirects and rewrites, set request and response headers, and manage cookies.

You can read more about the areas of application in the next.js middleware documentation.

The same can be done in segments (i.e., API and pages). To make a segment work in the edge runtime, you need to export from the segment file:

export const runtime = 'edge';

Thus, the segment will be executed in the edge runtime, not on the server itself.

However, an important caveat is needed here. Everything described above is not a full-fledged edge runtime by itself: it will be distributed across the Edge Network only when the service is deployed on Vercel.

Also, beyond all these capabilities, the edge runtime has several limitations. For example, even though outside of Vercel the edge runtime is part of the server, you cannot interact with that server from it. This is because it was developed specifically for the Vercel Edge Network.

Edge runtime concept in Vercel

As mentioned, edge runtime functions can be called mini-applications. They are “mini” because they run directly on V8 (the JavaScript engine behind Google Chrome, Node.js, and Electron) without the full Node.js API. This is their key detail, and both the features from the previous section and the restrictions depend on it.

Namely, in the edge runtime you cannot use most Node.js APIs (such as fs or net), rely on packages that require them, or execute dynamically generated code (eval, new Function).

The full list of supported APIs and restrictions can be found on the next.js documentation page.

Thus, the Vercel Edge Network can be responsible for, for example, A/B test decisions, authorization checks, redirects and rewrites, and returning cached responses.

The edge runtime acts as the first stage of segment processing and is most effective in situations where all processing can take place inside the edge container — for example, for redirects or returning cached data. The typical processing order on Vercel is described below.

After the build, Vercel sends the new edge runtime code (which now compiles to machine code) to the servers, and they immediately start working with the new code.

Vercel itself uses the edge network for all applications and all requests. That is, after a domain is connected, Vercel immediately configures it to be available at these points around the world. The next time a user visits the page, their provider asks the network where the domain is located, gets the available locations in response, chooses the nearest point, and goes to it.

These edge points always have caching logic, and if the project contains rewrites, redirects, middleware, or segments in the edge runtime, the build sends all of this to the edge servers.

Then the edge runtime processes the request: it checks rewrites and redirects -> passes it through middleware -> checks the cache -> if the segment runs in the edge runtime, executes it there; if not, sends the request to the origin server (Vercel does not document this order or the internals of the edge runtime anywhere — this is how I see it).

In summary, it is beneficial to use the edge runtime when all processing can be done within the edge environment (the request is Client -> Edge). If you need to access the main server (for example, for a database connected within the project, or to read files for some reason), it is not advantageous: the request will still be Client -> Edge -> Server. And since you need to reach the server anyway, it is better to do all the processing there — it has the full cache, the database nearby, the whole system nearby, and, overall, more capabilities.

Expected changes in the edge runtime

Despite the edge runtime being one of the key features of Vercel as a hosting platform, the team is actively revising it — not only its application but also its necessity as a whole. Recently, Vercel VP Lee Robinson shared in a tweet that Vercel [as a company] stopped using the edge runtime in all its services and returned to the Node.js runtime. The team also expects the experimental partial pre-render (PPR) to be so effective that edge rendering will lose its value.

And it was PPR, along with advanced caching, that pushed the edge runtime into the background. Previously, the entire page was rendered either on the server or in the edge runtime, and the edge runtime won precisely because of its closer location. Now pages are mostly pre-generated; upon request, individual dynamic parts are rendered and cached. That cache, in turn, is unique to each edge point, whereas on the server it is shared by all users.

And, of course, the server has access to the environment, database, and file system. Therefore, if the page needs this data, the nodejs runtime wins significantly (gathering everything in one environment is faster than making requests to the server from the edge environment each time).

Vercel is likely to introduce new priorities in its pricing, restructuring them around partial pre-render. Perhaps with these changes, tweets with bills of tens of thousands of dollars will become fewer (but this is not certain).

In addition, the Next.js team recently shared a tweet about revising middleware. It is very likely that, like segments, it will get a choice of execution environment. Again, considering that outside Vercel middleware works as part of the server, this is a very logical decision. It is also possible that with these changes a separate middleware for API routes will appear.

Expanding the Edge runtime

I am the author of several packages for next.js — the nimpl.dev family. I have already mentioned getters with information about the current page in “Next.js App Router. Experience of use. Path to the future or a wrong turn”, the translation library in “More libraries to the god of libraries or how I rethought i18n [next.js v14]”, and caching packages in “Caching in next.js. Gift or curse”. But this family also includes packages built specifically for the edge runtime — router and middleware-chain.

@nimpl/router

As mentioned, the edge runtime works best when it can handle the entire request in a self-contained mini-application. In all other cases, it is an unnecessary step, since the request will still go to the server, just via a longer path.

One of these tasks is routing. Routing also includes rewrites, redirects, basePath, and i18n from next.config.js.

Their main problem is that they are set only once — in the configuration file — for the entire application; on top of that, i18n is full of bugs. In the App Router there is no i18n option at all, and the documentation recommends using middleware for this case. But such a separation means that redirects from the config and i18n routing from middleware are processed separately. This can cause double redirects (first the redirect from the config, then the redirect from the middleware) and various unexpected artifacts.

To avoid this, all this functionality should be gathered in one place. And, as the documentation recommends for i18n, this place should be middleware.

import { createMiddleware } from '@nimpl/router';

export const middleware = createMiddleware({
    redirects: [
        {
            source: '/old',
            destination: '/',
            permanent: false,
        },
    ],
    rewrites: [
        {
            source: '/home',
            destination: '/',
            locale: false,
        },
    ],
    basePath: '/doc',
    i18n: {
        defaultLocale: 'en',
        locales: ['en', 'de'],
    },
});

Familiar Next.js redirects, rewrites, basePath, and i18n settings but at the edge runtime level. Documentation for the @nimpl/router package.

@nimpl/middleware-chain

Working with ready-made solutions or creating my own, I ran into the problem of combining them in a single middleware time and time again — that is, when you need to connect two or more ready-made middleware to one project.

The problem is that middleware in next.js is not the same as in express or koa — it immediately returns the final result. Therefore, each package simply creates the final middleware. For example, in next-intl it looks like this:

import createMiddleware from 'next-intl/middleware';

export default createMiddleware({
  locales: ['en', 'de'],
  defaultLocale: 'en',
});

I am not the first to encounter this problem, and ready-made solutions can be found on npm. They all work through their own APIs — made in the style of express or in their own vision. They are useful, well implemented, and convenient, but only when you can update every middleware you use.

However, there are many situations where you need to combine already existing solutions. In the issues of those packages you can usually find requests like “add support for chaining package A” or “make it work with package B”. It is for exactly such situations that @nimpl/middleware-chain was created.

This package allows you to create a chain of native next.js middleware without any modifications (that is, you can add any ready-made middleware to the chain).

import { default as authMiddleware } from "next-auth/middleware";
import createMiddleware from "next-intl/middleware";
import { chain } from "@nimpl/middleware-chain";

const intlMiddleware = createMiddleware({
    locales: ["en", "dk"],
    defaultLocale: "en",
});

export default chain([
    intlMiddleware,
    authMiddleware,
]);

The chain processes each middleware sequentially. During processing, all modifications are collected until the chain is completed or until any element in the chain returns FinalNextResponse.

export default chain([
    intlMiddleware,
    (req) => {
        if (req.summary.type === "redirect") return FinalNextResponse.next();
    },
    authMiddleware,
]);

This is not Koa or Express, this is a package for next.js, in its unique style and format of its API. Documentation for the @nimpl/middleware-chain package.

And to end, let me leave a few links here: My Medium with other useful articles | nimpl.dev with package documentation | GitHub with a star button. The dot map used as the background for images at the beginning of the article was made by macrovector from freepik.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreia3ibwsbuj273zv7pfv457wbncd5oegxygobbnyrl7aprccrebsne@png" type="image/jpeg" /></enclosure></item>
		<item><title><![CDATA[Measure twice and release once. A/B tests of static sites]]></title><link>https://alexdln.com/blog/ab-tests</link><guid isPermaLink="true">https://alexdln.com/blog/ab-tests</guid><pubDate>Tue, 04 Jun 2024 18:31:00 GMT</pubDate><description><![CDATA[A release starts with an idea — that perfect idea that comes up in a brainstorm, the one that will appeal to all users and attract new customers. The idea is presented to a team of managers and marketers and is unconditionally supported by everyone.]]></description><content:encoded><![CDATA[A release starts with an idea — that perfect idea that comes up in a brainstorm, the one that will appeal to all users and attract new customers. The idea is presented to a team of managers and marketers and is unconditionally supported by everyone.

The technical specification is elaborated and the task is handed to the developers. They grumble, call the update unnecessary, set clearly inflated deadlines, but eventually do the task. The work is tested and goes out to end users. At this point, the life cycle of the idea is complete. Now all that remains is to wait for a mass of fresh analytics and celebrate…

Some losses are allowed in the first week — there is little data, users are getting used to the updates, other improvements are being rolled out, and outliers have a strong influence. However, by the second week, it becomes evident that the idea not only did not attract new clients but also made some users use the product less.

The idea, which has gone through dozens of discussions and received hundreds of enthusiastic comments, has failed.

The Hypothesis Failed

The introduction turned out to be long. But I wanted to start this article with the long journey of a hypothesis, because it broke at the very beginning — it was supported only by people similar to its author. And these people are not the most representative of the target audience; they may even be rare exceptions within it.

That’s why, when changing existing functionality, teams do not rely on the opinions of the author and the team alone. To make the right choice, they conduct research, analyze existing product and market analytics, and compare with competitors. But behind all these methods often hides the only reliable way to test a hypothesis on the business’s audience. Which is (attention!) — to test it directly on the business’s audience.

But not on all of it. This method is called A/B testing, and the rest of the article will be devoted to it.

A/B Testing

As we found out above, an A/B test is testing a hypothesis on the business’s own audience. The test works by comparing one variant of functionality (option A) against another (option B).

Sometimes A/B tests are conducted alternately — first measuring option A and then, the next week, option B. This variant will not be described in the article, because it is not interesting in technical terms (and there will be nothing here about data collection and analysis).

A/B testing can be used to check changes — in which case option A remains the current functionality — as well as to compare several implementations of a new idea — in which case both options contain new functionality and are compared against each other.

Despite the name, there can be any number of options. The main thing is that the audience allows it: it should be possible to collect enough data for each option, excluding outliers and interference.

So, suppose a decision is made to make a critical change to the website or application. The seriousness of the change is assessed, and a decision is made to implement it through an A/B test. Depending on the risks, the team also decides how to distribute the traffic.

Often the test starts by showing the new option to only 10% of users. Then, if the changes did not sharply worsen the metrics for those 10%, it is extended to half of the users so that the comparison is full-fledged. Based on the results of this test, a decision is made — keep the new option or return the previous one.

Of course, based on the test results, the idea can also be returned for revision and an updated test launched. This can be repeated dozens of times until the change leads to business metrics growth.

A/B Test Rules

A/B Test Scheme

Now, having sorted out what is conducted, why, and how, we can finally move on to the most interesting part — the technical one.

And it is worth starting with the basic scheme of the application’s work:

Client — server — client

A very simple communication scheme: the client sends a request to the needed address, the server processes it and returns a response.

With the advent of A/B tests, this scheme starts working a little differently. Now identical requests, made at the same time and under the same conditions, are expected to return different answers — that very option A or option B.

In practice, this is usually implemented by an intermediate layer — at the CDN level, as regular middleware on the server, or with other intermediate tools such as nginx (there is a module for conducting A/B tests in nginx). Further, for the simplicity of the story, just middleware will be used.

In fact, A/B tests can be conducted entirely on the client side. This is how Google Optimize worked (it was deactivated in September 2023). The main problem with that approach was that a user who got option B was redirected to another page. This made option B less comfortable for the user and gave the testing away.


Implementation of A/B tests

Below, a solution in next.js will be described, but it can be reproduced with any other technology that can set cookies and perform rewrites (or return a specific page).

In next.js, this is done by middleware, which on Vercel runs in the so-called edge runtime, i.e., at the CDN level. Outside of Vercel (the platform for deploying applications that owns next.js), it is just a part of the server that runs before route processing.

The first and simplest method of testing is to show one of the options without any conditions:

import { NextResponse, NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
    if (request.nextUrl.pathname === '/home') {
        if (rollVariant() === 1) {
            return NextResponse.rewrite(new URL('/home-animated', request.url));
        } else {
            return NextResponse.rewrite(new URL('/home', request.url));
        }
    }
}

The user has opened the /home page, and a random option is selected in the middleware. If the user gets option B, the home-animated page is returned; otherwise, the standard home.

It is more convenient to make each variant of the tested interface a separate page — a new variant, a new page.

root
--app
----about
------page.tsx
----home
------page.tsx
----home-animated
------page.tsx

How to choose which option to show to the user? Just roll the dice! If the roll is less than half — variant A, otherwise — variant B.

const rollVariant = () => Math.random() < 0.5 ? 1 : 0;

Now, depending on the rolled value, the user receives from the server either the standard page or home-animated — in the same amount of time and invisibly to the user.

However, with each visit the user would get a random option. To prevent this, you can record in the database that the client has become a participant in the A/B test. In the case of anonymous tests, the test information can be saved in cookies and read from them in the future.

So, if the client already has the cookie, you can skip the request check and option selection steps and immediately serve the needed page.

import { NextResponse, NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
    if (request.nextUrl.pathname === '/home') {
        const prevVariant = request.cookies.get('ab_variant')?.value;
        const variant = prevVariant ? Number(prevVariant) : rollVariant();
        let next: NextResponse;
        if (variant === 1) {
            next = NextResponse.rewrite(new URL('/home-animated', request.url));
        } else {
            next = NextResponse.rewrite(new URL('/home', request.url));
        }
        next.cookies.set('ab_variant', variant.toString());
        return next;
    }
}

Of course, this data needs to be analyzed. There are two options — send the data from the server, in parallel with serving the result to the user, or send it from the client, having first passed the test results down from the server. For the latter, you can use the previously created cookies.

Further, it may be necessary to launch A/B tests only for a specific group. This can be a certain share of users, users of specific browsers, users from specific companies, or anything else.

That is, you need to check whether the user matches the conditions and, depending on the result, either include them in the test or not:

import { NextResponse, NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
    if (request.nextUrl.pathname === '/home' && request.nextUrl.searchParams.has('utm_campaign')) {
        // ...
    }
}

Also, it may be necessary for only new users to participate. Formally, this is the same task as described above — a group of users who have not been on the site before. For anonymous users, it can be determined, for example, by the absence of test, policy-acceptance, or analytics cookies.

Of course, one test will not be enough, and it will be necessary to run dozens, if not hundreds, of tests in parallel. The same logic is used for this, but requests are now checked against an array of descriptions of the running tests until the first suitable one — see the sketch below.
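
A rough sketch of that idea (the test shape and matching are simplified for illustration):

import { NextResponse, NextRequest } from 'next/server';

interface AbTest {
    id: string;
    source: string;
    variants: { weight: number; destination: string }[];
}

// Hypothetical list of running tests; in practice it could come from a config or database.
const tests: AbTest[] = [];

const pickVariant = (variants: AbTest['variants']) => {
    // Weighted random pick: weights are expected to total one.
    let roll = Math.random();
    for (const variant of variants) {
        roll -= variant.weight;
        if (roll <= 0) return variant;
    }
    return variants[variants.length - 1];
};

export function middleware(request: NextRequest) {
    // Use the first test whose source matches the current path.
    const test = tests.find((t) => t.source === request.nextUrl.pathname);
    if (!test) return;
    const cookieName = `ab_${test.id}`;
    const prev = request.cookies.get(cookieName)?.value;
    const destination = prev ?? pickVariant(test.variants).destination;
    const next = NextResponse.rewrite(new URL(destination, request.url));
    next.cookies.set(cookieName, destination);
    return next;
}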

Of course, every company will have its own conditions, requirements, and processes. The basic example described above is one possible implementation, from which everyone can decide exactly what they need and how.

Nevertheless, it was decided to try to implement a universal package for conducting A/B tests in next.js — @nimpl/ab-tests.

@nimpl/ab-tests

The first thing to note is that the package satisfies everything described above, including all the rules. At the same time, it has a number of pleasant capabilities, executed in an API familiar to next.js developers.

The operation of the package can be described simply: the middleware finds the first test matching the request, picks a variant by weight, rewrites the request to the variant's destination, and stores the choice in a cookie.

The main advantage of the package is the principle of finding a suitable test. Each test may include the keys has and missing. Those familiar with next.js know these keys from working with rewrites and redirects. For example, a test can be described as follows:

{
  id: 'some-id',
  source: '/en-(?<country>de|fr|it)',
  has: [
    {
      type: 'query',
      key: 'ref',
      value: 'utm_(?<ref>moogle|daybook)',
    }
  ],
  variants: [
    {
      weight: 0.5,
      destination: '/en-:country/:ref'
    },
    {
      weight: 0.5,
      destination: '/en-:country/:ref/new'
    }
  ],
}

This test will be performed for all users who come to pages with English locales and a utm_* label. Then the user will see either the base page for this company or a new one.

Each test also contains other keys, such as:

id - the identifier of the test, which will be written in the cookie;

source - another familiar key from next.js - the path on which the test is conducted;

variants - a list of variants, of which there can be any number.

Each variant describes a weight and a destination (again, a familiar key from next.js). The main rule is that the weights sum to one.

Additional part

The development of the package did not end there. While working on it, I decided to test it in several projects. However, adding a simple middleware turned out to be a real adventure: the projects already had middleware — one with next-intl, one with next-auth.

Surprisingly, neither project had previously needed to support two external middleware (only an external one together with internal logic). Searching turned up no ready-made solutions. All existing ones work through their own APIs — styled after express.js or following their own vision. They are useful, well implemented, and convenient, but only when you can update every middleware you use to fit them.

The situation here is quite different: each middleware must keep working as an original next.js middleware. In short, yet another new solution was needed, and I took it upon myself.

So @nimpl/middleware-chain appeared:

import { default as authMiddleware } from "next-auth/middleware";
import createMiddleware from "next-intl/middleware";
import { chain } from "@nimpl/middleware-chain";

const intlMiddleware = createMiddleware({
    locales: ["en", "dk"],
    defaultLocale: "en",
});
export default chain([
    intlMiddleware,
    authMiddleware,
]);

A small and neat addition.

You can check out these and other packages for next.js at nimpl.dev.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreifnhg5ke3ha62wt4wh7jcwycqtylv4k2h7uzrupp24magkgncgtxm@png" type="image/jpeg" /></item>
		<item><title><![CDATA[React Conf 2024. React v19 RC]]></title><link>https://alexdln.com/blog/react-conf-2024</link><guid isPermaLink="true">https://alexdln.com/blog/react-conf-2024</guid><pubDate>Thu, 16 May 2024 19:13:00 GMT</pubDate><description><![CDATA[The first day of React.js Conf just concluded. This much-anticipated conference took place almost 3 years after the previous one. The React updates were just as eagerly awaited. The conference began with these updates and this article will be dedicated to them. And yes, as you saw from the preview — version 19 has moved into the release candidate status. The full release is promised within two weeks.]]></description><content:encoded><![CDATA[The first day of React.js Conf just concluded. This much-anticipated conference took place almost 3 years after the previous one. The React updates were just as eagerly awaited. The conference began with these updates and this article will be dedicated to them. And yes, as you saw from the preview — version 19 has moved into the release candidate status. The full release is promised within two weeks.

Overall, as a next.js developer, most of it was familiar to me. Dozens of articles on Habr have already covered almost every part of this update, and I partially touched on the updates that made it into next.js.

It can be said that the main directions of this update were achieving “High UX at high DX”. Maximum performance with maximally simple code. At the same time, there was almost no mention of server components in part of the updates, only indirectly. And so, let’s move on to the conference itself.

As usual in such conferences, everything starts with a description of growth. React downloads reached One Billion per year. The growth of the tool is inevitably linked to the growth of the community. Therefore, Stackoverflow statistics were also shown — 40% of developers use react in web development, 36% are learning it.

Also of interest: React's functionality has increasingly become fully available only within frameworks, so React.js has now started recommending specific ones. The slide showed remix, redwoodjs, next.js, and expo. Interestingly, react router was not on that list.

Yes! React Router can now be added to this list. The first conference talk, by Ryan Florence, was about it. With react router you can now build not only SPAs but also SSR and SSG sites, in conjunction with Vite. Hooks for working with data, and server components, are available.

But for now, let's return to the changes in React.js. Next came a description of the problem of coordinating elements as an application grows. JSX solved the problem of coordinating elements in UI development. Then Suspense was added, solving coordination during loading (what to do while something loads and what to show the user in the meantime).

In React 19, the following were also added:

Component stylesheet loading is now tracked by Suspense. That is, you can display the loader not only while the component is rendering, but also while its styles are being prepared.

With the advent of server components, React took on more responsibility for server rendering; as a result, hydration carries even more logic and potential problems. The React.js team improved hydration errors.

In addition to significant changes in how the real tree is built, the logic of working with forms has been updated in React.js. First of all, the form component was included in react-dom, which brought significant changes on top of the plain element. The headline change concerns the “action” attribute — an alternative to submitting a form through onSubmit or the native attribute.

Adding action looks just like onSubmit, but instead of an event, the handler immediately receives FormData.
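A minimal sketch (subscribe is a hypothetical request):

function Signup() {
  async function handleAction(formData: FormData) {
    // `action` receives FormData directly, no event or preventDefault needed
    await subscribe(formData.get('email')); // hypothetical API call
  }

  return (
    <form action={handleAction}>
      <input name="email" type="email" />
      <button type="submit">Subscribe</button>
    </form>
  );
}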

Also, for form fields and buttons, the formAction prop has been added, which works in an identical manner.

Perhaps the main advantage of action over onSubmit: with client actions, if the user submits the form immediately (even before the form's logic has loaded), the submission is postponed and performed as soon as the logic is ready. With server actions, the submission happens immediately, because it does not require client-side JS.

But, in addition to the basic difference, there are significant changes in interaction with form submission — these are new hooks. useOptimistic, useFormStatus and useActionState.

Sam Selikoff shared examples of working with them in his presentation “React unpacked: A Roadmap to React 19”. For example, this is what replacing onSubmit with action + useActionState looks like:
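The slides themselves are not reproduced here, but roughly like this (updateName is a hypothetical request that returns an error message or nothing):

import { useActionState } from 'react';

function NameForm() {
  const [error, submitAction, isPending] = useActionState(
    async (previousState, formData) => {
      // Hypothetical request; returns an error message or null
      const err = await updateName(formData.get('name'));
      return err ?? null;
    },
    null,
  );

  return (
    <form action={submitAction}>
      <input name="name" />
      <button type="submit" disabled={isPending}>Save</button>
      {error && <p>{error}</p>}
    </form>
  );
}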

Then you can add an optimistic render with useOptimistic:
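A sketch (sendMessage is assumed to be passed in from a parent):

import { useOptimistic } from 'react';

function Thread({ messages, sendMessage }) {
  // Optimistic state falls back to `messages` once the action settles
  const [optimisticMessages, addOptimisticMessage] = useOptimistic(
    messages,
    (state, newMessage) => [...state, { text: newMessage, sending: true }],
  );

  async function formAction(formData) {
    addOptimisticMessage(formData.get('message'));
    await sendMessage(formData);
  }

  return (
    <form action={formAction}>
      {optimisticMessages.map((m, i) => (
        <div key={i}>{m.text}{m.sending ? ' (sending...)' : ''}</div>
      ))}
      <input name="message" />
    </form>
  );
}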

And again, let's return from the talks to the key changes. A relatively small change was shown next, but a very, very valuable one. In React.js 19, you can pass ref to a function component as a regular prop. Right away. Without forwardRef.
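A minimal sketch (MyInput is an illustrative component):

function MyInput({ placeholder, ref }) {
  // `ref` arrives as a regular prop, no forwardRef wrapper needed
  return <input placeholder={placeholder} ref={ref} />;
}

// Usage: <MyInput ref={inputRef} placeholder="Name" />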

Also, ref callbacks can now return a cleanup function, which will be called on unmount.
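For example, a sketch:

<input
  ref={(node) => {
    // Called on mount with the DOM node
    console.log('mounted', node);
    // Returning a function registers a cleanup that runs on unmount
    return () => console.log('unmounted', node);
  }}
/>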

The final key change was the React Compiler — a build-time tool with memoization out of the box. With it, React automatically sets up memoization across the application. Lauren Tan elaborated on this in her presentation “React Compiler Case Studies”.

So, to understand how to set up memoization, React analyzes the relationships from the place that triggers the rerender to the endpoints:

Based on these relationships, the compiler can imagine a full graph of dependencies:

And then, depending on these connections, set up memoization with the necessary dependencies. In this case, since songs do not change — filteredSongs should remain the same (they will be memoized with a dependency on songs), and if the song is changed by setSong, NowPlaying should be rerendered (it will be memoized with a dependency on song).
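Roughly reconstructing the example from the talk (SongList and the surrounding shape are assumptions; songs, filteredSongs, song, and NowPlaying come from the description above):

import { useState } from 'react';

// SongList and NowPlaying are hypothetical child components
function Player({ songs }) {
  const [song, setSong] = useState(songs[0]);

  // The compiler memoizes this with a dependency on `songs`,
  // so it is not recomputed when `song` changes
  const filteredSongs = songs.filter((s) => !s.hidden);

  return (
    <>
      <SongList songs={filteredSongs} onPick={setSong} />
      {/* Re-rendered only when `song` changes */}
      <NowPlaying song={song} />
    </>
  );
}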

“Maximum performance with maximally simple code”.

A great solution; still, it will be interesting to see how memoization plays out in practice: where developers should still write it by hand, and where it is better not to overcomplicate and to leave this logic to the compiler. You can install the compiler right now in all major frameworks and build systems that support babel. It is already being used in Instagram, Facebook, and Bluesky (the company where Dan Abramov now works).

Also, to increase reliability and quality of compilation, you can install an eslint plugin, which will indicate all problems with code optimization. In general, the plugin can be used independently of the compiler.

npm install eslint-plugin-react-compiler

You can also use a command-line utility that will check how much of the application can be optimized by the compiler:

npx react-compiler-healthcheck

Another innovation was shared by Lydia Hallie — the use function. Yes, it’s not a mistake — it’s not a hook.

The key difference between use and hooks is that use can be called inside conditions:
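For example, a sketch (commentsPromise is created higher up the tree):

import { use } from 'react';

function Comments({ commentsPromise, show }) {
  if (!show) return null;

  // Unlike hooks, `use` is allowed after an early return
  const comments = use(commentsPromise);

  return comments.map((comment) => <p key={comment.id}>{comment.text}</p>);
}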

use itself can take either a promise or a context. It is hard to imagine a situation where you would not know in advance which of the two you are passing, so why not just make two independent functions?

In conclusion, I will note the amazing talks, presentations, examples, and performances in general. The React.js team really managed to show off the possibilities of all the improvements (next.js team, forgive me, but they didn't even come close). Another pleasant difference: the React.js team refused to include the fetch API rewrite in the core and rolled back the already finished changes.

The general list of changes looks like this:

UPD: Seems like I can leave some advertising here. If you're using next.js, you might want to check out the solutions from nimpl.tech; you may find some of them useful (for example, the getPathname getter for server components, or the package for setting up configuration across all next.js environments).

UPD2: Readers are still arriving here, so I'll leave another update. I just submitted a PR adding getPageContext functionality to next.js (for non-next.js readers: the huge pain of working with contexts in server components [because there simply isn't one, and there are no alternatives]). Leave a reaction if you're familiar with this pain.
		<item><title><![CDATA[Caching in next.js. Gift or Curse]]></title><link>https://alexdln.com/blog/caching-in-nextjs-gift-or-curse</link><guid isPermaLink="true">https://alexdln.com/blog/caching-in-nextjs-gift-or-curse</guid><pubDate>Tue, 19 Mar 2024 20:19:00 GMT</pubDate><description><![CDATA[The App Router significantly expands the functionality of next.js — partial pre-rendering, templates, parallel and interceptable routes, server components, and much more. However, despite all these improvements, not everyone has decided to switch to the App Router. And there are reasons for that.]]></description><content:encoded><![CDATA[In version 13, the next.js team introduced a new approach to application design — the so-called App Router. In version 14, it was made stable and primary for new applications.

The App Router significantly expands the functionality of next.js — partial pre-rendering, templates, parallel and interceptable routes, server components, and much more. However, despite all these improvements, not everyone has decided to switch to the App Router. And there are reasons for that.

I briefly discussed the advantages and problems of the new router in the article “Next.js App Router. Experience of use. The path to the future or the wrong turn”. Further, the conversation will not be about new abstractions or their features. In fact, the key and most controversial change is caching. This article will explain what, why, and how the most popular frontend framework, Next.js, caches.

What does next.js cache?

On the next.js website, you can find excellent documentation on the caching process. First, a brief overview of the main points from the article.

Any request in next.js triggered through fetch will be memoized and cached. The same will happen with pages and the cache function. How this works under the hood will be discussed in the following sections. The general page building process works as follows:

That is: the user goes to the page, a request for a route is sent to the server, the server starts rendering the route, sending the necessary requests along the way. Then all this is executed and cached.

In addition to the cache, the scheme also includes memoization. It exists for recurring requests: duplicates are not sent several times but instead subscribe to the result of the first one.

Caching on the server is done using the so-called Data Cache. You can remove data from it by calling the revalidatePath and revalidateTag functions. The first one will update the cache for the page, the second one for the tag specified in the requests.

Data is also cached on the client side — inside the client router.

Not mentioned in that article: next.js also caches rewrites and redirects. That is, if the user was once redirected from the / page to /login on the server, they will continue to be redirected there. This is cached in the client router until the client cache is cleared.

You can clear the cache on the client using router.refresh or by calling revalidatePath and revalidateTag in server actions.

'use server'

import { revalidateTag } from 'next/cache'

export default async function submit() {
  await addPost()
  revalidateTag('posts')
}

Why is caching needed in next.js?

The fetch from next.js is a wrapper over the native node.js fetch. The wrapper is configured to connect with the so-called Data Cache. This is done so that each request can be processed as described in the schemes above. The next.js team is most often criticized by the community for this replacement of the native API.

Later in next.js, the ability to disable caching of a request was added with the cache: "no-store" option. But even with this option, it will continue to be memoized. As a result, one of the key APIs for development has ceased to be controlled by the developer.

Nevertheless, there were reasons for this step. And it is unlikely that the initial reason was optimization; for optimization, it would have been enough to create a new request function, a separate API, of which there are hundreds in next.js.

I followed a similar path when developing the next-translation package (which I wrote about in a previous article). An interesting problem arose there: too many requests were going to the server (ones not triggered through fetch). Digging into the reasons and reading the next.js source code, it became clear that the application is now built in several independent threads (strangely, this was never mentioned in the release notes). Each thread lives as an independent process, so it was impossible to set up normal application-wide caching inside the package.

The same problem arose before the next.js team: each integration, each package, each user now began to send several times more requests, and previously configured caching systems stopped working correctly. The solution was the remodeling of fetch, hiding this feature under the hood.

How does Data Cache work?

The saving of loaded or generated data occurs in the so-called cacheHandler. Out of the box, next.js has 2 options for cacheHandlers — FileSystem and Fetch. This cacheHandler will be used both for caching requests and for caching pages.

FileSystem is used by default; it saves data to the file system, additionally memoizing it in memory. It copes well with its task but has one drawback: it works as part of the application. It follows that if the application runs in several replicas, each of them will have an independent cacheHandler.

This problem is felt especially strongly when the application works in ISR mode: you have to reach every replica and revalidate the cache in each of them, while also checking that they load the same data. And if two replicas work with one folder, write conflicts can arise in the file system.

Probably for this reason, you can find the Fetch variant in the framework code. It saves the cache to a remote server. However, this cacheHandler is only used when the application is published on Vercel, as it saves data on Vercel's servers.

As a result, the out-of-the-box options do not cover all needs: FileSystem is not suitable when there are several replicas, and Fetch is not suitable when the application is not deployed on Vercel. An important feature is that next.js lets you write your own cacheHandler. To do this, you pass the application configuration the path to a file with a class (CacheHandler) describing the get, set, and revalidateTag methods:

// cache-handler.js
module.exports = class CacheHandler {
  constructor(options) {
    this.options = options
  }

  async get(key) {
    // ...
  }

  async set(key, data, ctx) {
    // ...
  }

  async revalidateTag(tag) {
    // ...
  }
}
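For reference, a minimal in-memory version of such a class (close to the example in the next.js documentation) could look like this:

// cache-handler.js
const cache = new Map()

module.exports = class CacheHandler {
  constructor(options) {
    this.options = options
  }

  async get(key) {
    return cache.get(key)
  }

  async set(key, data, ctx) {
    cache.set(key, {
      value: data,
      lastModified: Date.now(),
      tags: ctx.tags,
    })
  }

  async revalidateTag(tag) {
    // Drop every entry marked with the revalidated tag
    for (const [key, value] of cache) {
      if (value.tags?.includes(tag)) {
        cache.delete(key)
      }
    }
  }
}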

And connect it in the application configuration:

module.exports = {
  cacheHandler: require.resolve('./cache-handler.js'),
  cacheMaxMemorySize: 0, // disable default in-memory caching
}

One of these cacheHandlers is cache-handler-redis, which the next.js team referred to in the last release.

Key points

Next.js caches a large part of the processes.

Caching occurs in several stages — caching transitions and pages in the client router, memoization of the request, caching on the server of requests and pages.

Quite often, applications are launched in several replicas. Replicas need a common cache; this is especially acute when the application works in ISR mode.

The application itself is assembled in several threads that do not have access to each other.

The cacheHandler is responsible for caching. Next.js has two out-of-the-box options — working with the file system and working with a remote server, but the latter is only available within Vercel.

You can write your own cacheHandler.

Caching refinement

Let's go back to the next-translation package. To solve the problem of unnecessary requests, I found an interesting way out: spin up an additional server and route requests through it. As a result, all requests originate from one place, which means caching can be configured there. This is similar in principle to FetchCacheHandler and to Vercel's approach in general — during the build, data is cached on a Vercel server, and since that server is nearby, it works quickly.

However, caching is too much responsibility for a translation library. The next task was to rework the caching logic so as to combine the next.js API with libraries and solve the common problems. As a result, another library was created — next-impl-cache-adapter.

Cache management

As already mentioned, for a cache shared between instances (replicas, copies), the cache must live separately from each instance of the application. next-impl-cache-adapter solves this by introducing a separate service.

This service is a server running the desired cacheHandler. Each application instance processes cache requests through this server. The server does not need to be restarted with each build: outdated data is automatically deleted when a new version of the application launches. Server code:

// @ts-check
const createServer = require('next-impl-cache-adapter/src/create-server');
const CacheHandler = require('next-impl-cache-in-memory');

const server = createServer(new CacheHandler({}));
server.listen('4000', () => {
    console.log('Server is running at http://localhost:4000');
});

In this example, the server is passed next-impl-cache-in-memory — this is a basic cacheHandler that saves data in-memory.

A special adapter for working with the cache is configured in the application itself:

// cache-handler.js
// @ts-check
const AppAdapter = require('next-impl-cache-adapter');
const CacheHandler = require('next-impl-cache-in-memory');

class CustomCacheHandler extends AppAdapter {
    /** @param {any} options */
    constructor(options) {
        super({
            CacheHandler,
            buildId: process.env.BUILD_ID || 'base_id',
            cacheUrl: 'http://localhost:4000',
            cacheMode: 'remote',
            options,
        })
    }
}

module.exports = CustomCacheHandler;

The created adapter is connected in the next.js configuration:

// next.config.js

module.exports = {
  cacheHandler: require.resolve('./cache-handler.js'),
  cacheMaxMemorySize: 0, // disable default in-memory caching
}

The package supports three caching options: local, remote, and isomorphic.

local — the standard option. The cache is handled next to the application. Convenient in development mode and in environments where the application runs as a single instance.

remote — the entire cache is written and read on the dedicated remote server. Convenient for applications launched in several replicas.

isomorphic — the cache operates next to the application but also saves data to the remote server. Convenient during the build: it prepares the cache for the moment application instances launch, without spending resources on loading the cache from the remote server.

Any cacheHandler supported by next.js can be plugged in here, and vice versa: cacheHandlers from the package can be connected directly in next.js.

Conclusions

The App Router introduced a lot of very useful updates, but lost ground in convenience, predictability, and versatility. First of all, because of caching: a task for which there is not, and cannot be, a universal solution. The ability to disable caching per request and to write your own cacheHandler solves most of the problems. However, memoization and caching in the client router remain out of the developer's control.

The next.js team itself is in no hurry to develop solutions for specific tasks. For this reason, since the release of the stable App Router, I have continued working on packages that solve next.js problems, writing articles about them along the way.

Let’s make the web not only faster, but also clearer.

Links

next-impl-cache — solutions for setting up caching in next.js.

next-impl-getters — implementation of server getters and contexts in React Server Components without switching to SSR.

next-impl-config — adding support for configuration for each possible next.js environment (build, server, client, and edge).

next-classnames-minifier — compression of classes to characters (.a, .b, …, .a1).

next-translation — i18n library, developed with consideration of server components and maximum optimization.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreib3phhpwxd7zupwn7lxll6hsakhkvn4t2wisrxwu3shhwcw4y2dim@png" type="image/jpeg" /></item>
		<item><title><![CDATA[History of Vercel 2020-Present (7/7). Zeit is now Vercel]]></title><link>https://alexdln.com/blog/zeit-is-now-vercel</link><guid isPermaLink="true">https://alexdln.com/blog/zeit-is-now-vercel</guid><pubDate>Wed, 28 Feb 2024 20:35:00 GMT</pubDate><description><![CDATA[In April 2020, tech company Zeit announced a major rebranding. This new turn allowed Guillermo Rauch to return to big business, but at the same time, it became the most controversial decision in the eyes of the community.]]></description><content:encoded><![CDATA[In April 2020, tech company Zeit announced a major rebranding. This new turn allowed Guillermo Rauch to return to big business, but at the same time, it became the most controversial decision in the eyes of the community.

Nonetheless, this decision had objective reasons. Zeit could be labeled a failure in terms of investments — essentially, it was not perceived as a startup and that was the main problem. Some of the investors in the future company Vercel — CRV — said they were “delighted to resume business with Guillermo, after he returned to the path of entrepreneurship and founded Vercel”.

The decision, so critically received by the community, turned out to be extremely effective for attracting investors. Immediately after the rebranding, the now-renamed Vercel announced raising $21 million. Soon the company received another $40 million, then $102 million — at which point Vercel was valued at $1.1 billion, becoming Guillermo's first unicorn.

In this, the final part of the series, we will talk about the extreme, the most important, and successful project of Guillermo — the company Vercel.


Rebranding

First, it's worth dwelling on what the company Zeit represented. This is quite difficult to describe: “Zeit” meant the company itself, but people often spoke of Zeit Now — in one interview, for example, Guillermo was asked a question starting with “Your products zeit and zeit now”.

The new company was supposed to unify these products. Today, “Vercel” covers both what Zeit and Zeit Now once were. “Our product consists of two parts: NextJS and a platform distributed around the world, and the business is based on the scalability of this platform” — Guillermo Rauch.

The decision to rebrand caused a surge of emotions in the community — a lot of negative comments, requests to return everything back and/or not to sell the company (it is difficult to understand why many considered this rebranding a sale).

Beyond this, there were more interesting reasons for the rebranding. Guillermo noted that the old name sounded different in different languages, and it was not always easy to write down what you heard: “The new name was analyzed in five languages by five linguists from different languages of the world”.

The new name — Vercel — was devised by the company Lexicon. In their words: “Zeit needed a name that would reflect the efficiency, superiority, and power of their platform. They also needed the name to be short, recognized worldwide, and easy for developers to type as a command”.

The business model as a whole remained unchanged. “Although anyone can access the library for free, the company’s business model is based on selling software as a service (SaaS) to companies” — Guillermo Rauch.

Investments

As mentioned earlier, the rebranding quickly paid off. Immediately after it, Vercel announced a successful round A — $21 million. Investors were: Accel, mentioned CRV, Naval Ravikant, Nat Friedman, Jordan Walke and others. Accel fund member — Daniel Levine — is a member of the Vercel board of directors.

The company went for the next round only after 8 months, in December 2020, and was able to raise $40 million. The biggest investor in this round was Google Ventures. Also, new investors were Greenoaks Capital, Bedrock Capital and Geodesic Capital. Bedrock founder Geoff Lewis is also a member of the Vercel board of directors.

The next round took place in June 2021. The company raised $102 million, received a valuation of $1.1 billion, thus becoming a unicorn company. The main investor was the company Bedrock Capital. All previous investors were retained and several new ones were added (8VC, Flex Capital, GGV, Latacora, Salesforce Ventures, and Tiger Global).

The last round took place in November 2021, during which Vercel raised $150 million. The main investor was GGV Capital, also all the previous investors supported the company (including Accel, Bedrock, and Google Ventures).

Private investments are also interesting. After leaving Automattic, Guillermo became an investor himself and began investing mainly in emerging tech companies. Among his seed investments were auth0 and scale, which soon became unicorns (Auth0 became the fifth unicorn with Argentine roots, shortly before Vercel).

Soon after, their co-founders (Auth0 — Matias Woloski, scale — Alexandr Wang) — invested in Vercel.

The complete list of Vercel investors can be found on the company’s website.

Total investments in Vercel amounted to $313 million. This allowed the company not only to actively develop and attract people but also to make some acquisitions.

Purchases and Important Actions

One of the company's most important and valuable moves was hiring Rich Harris, the creator of svelte. This happened in November 2021, two weeks before the close of the last investment round (and possibly played a significant role in its success).

So far, the company has made 2 major acquisitions: the company Splitbee and the utility turborepo. The cost of both purchases is not named.

First, Vercel bought turborepo, in December 2021 — a utility created by Jared Palmer (he is also the creator of Formik and TSDX). “Turborepo is a high-performance build system for JavaScript and TypeScript codebases”. The main features of the utility are parallel assembly of applications in a monorepo and caching processes (including remote ones).

Jared himself joined the Vercel team after the purchase; his main task was to speed up builds in Vercel.

The next purchase took place almost a year later, in October 2022. The company Splitbee was purchased — a platform for collecting real-time analytics, created only in 2020.

The co-founders of the company joined the Vercel team, their main task was to develop analytics tools within Vercel (in the service you can buy analytics collection, the price starts from $10 per project).

Open-Source Development

“Supporting open-source projects is an integral part of our mission “Make the Internet Faster””.

Despite several purchases — the main improvements come through integrating third-party services. For example, there is integration with Check.ly, which takes a URL and runs automatic end-to-end tests, simulating web browsers for this interface. Another interesting integration is made with Sanity CMS, together with them Vercel adds the possibility of editing the site on the site itself (next-preview.now.sh).

In total, Vercel has about 100 integrations (Mongo DB, Contenful, Wix, Shopify, AWS, Sentry, Slack, Auth0, etc.).

Vercel owns such frameworks and libraries as Next.js, Hyper, SWR, pkg, turbo, satori, serve, styled-jsx, ai, and other less popular utilities. Also, team members are authors of many libraries.

Next itself after the rebranding became 10 times more popular (from half a million to 5 million downloads weekly).

Vercel also sponsors Nuxt, Astro, webpack, Babel, NextAuth, Parcel, Unified, and other open-source projects.

Team

Today the company has more than 200 employees and a single office, in San Francisco, but most of the team works remotely from all over the world. It has been this way from the very beginning, as its predecessor, Zeit, was founded by an Argentine, a Finn, and a Japanese, who often worked from different places. “From the very beginning, we were a remote company. We worked together from San Francisco, Argentina, Brazil, Finland, Japan, and Germany. We were fortunate to use remote work before covid forced us” — Guillermo Rauch.

In hiring, the company adheres to the policy of Guillermo’s first projects — they attract active participants in open-source. Vercel is also worked on by the creator of Webpack and Turbopack Tobias Koppers, creator of Svelte Rich Harris, React developers — Sebastian Markbåge, Andrew Clark, and Josh Story.

Most of them are hired by the company with the expectation of continuing work on their projects. So, Sebastian still heads the main React team, but also helps support the development of React in Vercel.

Guillermo himself, in addition to Vercel, is involved in investing. In April 2022, he received a second citizenship — American.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreihsiltcmik3dvmhcnwee73papmjc55et5fzjb6ocf4p3stn3d5qmm@png" type="image/jpeg" /></item>
		<item><title><![CDATA[History of Vercel 2015–2020 (6/7). Zeit and Next.js]]></title><link>https://alexdln.com/blog/zeit-and-nextjs</link><guid isPermaLink="true">https://alexdln.com/blog/zeit-and-nextjs</guid><pubDate>Wed, 21 Feb 2024 20:38:00 GMT</pubDate><description><![CDATA[After leaving Automattic in 2015, Guillermo founded a new company — Zeit. Co-founders were Tony Kovanen and Naoyuki Kanezawa.]]></description><content:encoded><![CDATA[After leaving Automattic in 2015, Guillermo founded a new company — Zeit. Co-founders were Tony Kovanen and Naoyuki Kanezawa.

The main task of the company was to provide developers and teams with the ability to easily develop, preview, and deploy their applications.

“One of my dreams is that the next Facebook or Snapchat will be created by someone who didn’t have to go through all this training, develop these connections, and hire these bright people. It could be a girl from Africa or a boy from Bangladesh” — Guillermo Rauch.

This part will discuss the most uncertain period in the history of Vercel — the time when the company was becoming known to the community and fading from investors’ view.

Zeit first became known thanks to its product Now — a tool for deploying applications with a single command from the terminal, allowing developers to easily assemble their projects and instantly share them.


Now

The utility’s purpose was simple: You type “now” in the command line and get a new server. This concept is basically inherited by Vercel and even described in its main slogan “Develop. Preview. Ship”.

After entering the command, within a second, a new instance of the application is created and published on the Internet. You can share the link to the published application immediately, even before the build is completed — initially, the build process will be displayed on it, and then the application itself.

Guillermo described the procedure as follows:


A desktop application, “Now Desktop”, was also created for the utility. The utility itself supported publishing static sites and next.js applications, as well as applications in go, php, node.js, python, rust, and more.

Next, a global DNS solution — “Zeit World” was created.

HyperTerm

The Now utility had its page on the company’s website. At the very beginning of this page, there was a demo of its work in the terminal. And what is interesting here is not so much an example of work, but the fact that the terminal was added not as a gif, but written in pure html, css, and js.

Guillermo liked the demo result, as it looked like the simplest terminal. Then he thought about a new project — Hyper.app. The development of the first version took about two weeks. The terminal, like most other company projects, was published in open access and immediately after a quick presentation attracted the attention of developers — they began to actively participate in the development of the utility and soon more than 100 plugins were written for the terminal.

HyperTerm itself was created on Electron. This allowed you to open the developer console at any time and make various changes. You can also install ready-made plugins, many of which are collected in a special list.

The terminal is still highly popular (github, website).

Next.js

In 2016, Zeit released Next.js — a framework for creating Jamstack-style websites. The framework page lists the authors: Tim Neutkens, Naoyuki Kanezawa, Guillermo Rauch, Arunoda Susiripala, Tony Kovanen, Dan Zajdband. All but Dan worked at Zeit. Dan is familiar with Guillermo from JSConf Argentina, and probably as a developer of The Lift company (which used cloudUp, Guillermo’s second startup).

Next.js was initially released as an open source project on GitHub on October 25, 2016. The framework offers out-of-the-box server-side rendering, static site generation, API routes, and more. The goal was to provide critical features that React lacks — primarily in terms of speed and SEO optimization.

“We work for those who do front-end design to make e-commerce sites, media and everything else better… Everything should look good, sites should load quickly” — Guillermo Rauch

Today, Next.js is used by Uber, Amazon, Open AI, and thousands of other companies. Being a framework for React, Next.js has become an important part of the ecosystem — “NextJS takes it to a new level. And now many ideas from NextJS inspire React itself”.

Zeit Platform

Despite the fact that applications were published through the Now utility — it was just a part of the Zeit platform.

After entering the platform, you could publish up to 3 applications for free through Now.

However, you could not link domains in the free tariff. This was a paid feature, the price started from $15, and, for example, support cost from $750 to $2000. Also, you could buy domains directly in the platform.

The pricing policy was changed in December 2018 and, most importantly, limits on the number of applications and linked domains were removed, and for many categories, the price was calculated based on the resources spent over the limits. It lasted until the end of 2019, when it was changed again and fixed tariffs from $20 appeared.

Co-founders

Guillermo Rauch

In addition to the utilities mentioned in the article, Guillermo participated a lot in conferences and presented Now and Next.js. Also during this period, he was a mentor for the “Open Source Engineering” course organized by Stanford.

Tony Kovanen

Tony worked with Guillermo at Automattic, mainly on the Jetpack plugin. He left Automattic with Guillermo and became a co-founder of Zeit, where he held the position of CTO.

At Zeit, Tony participated in the development of next.js and Now. He worked there until 2017, after which he moved to Gatsby. Now Tony is working on the Based.io platform.

Naoyuki Kanezawa

Naoyuki was one of the main developers of Socket.IO and Engine.IO (created by Guillermo).

Now Naoyuki remains part of the Vercel team in the position of Infrastructure/Backend Developer.

Team

From the team it is worth noting:

Tim Neutkens, creator of Micro and MDX;

Arunoda Susiripala, creator of React Storybook;

Igor Klopov, creator of Pkg;

Nathan Rajlich, creator of node-gyp;

Javi Velasco, creator of React Toolbox;

Nicolas Garro / Evil Rabbit, Founding Designer and Brand Architect.

Zeit Day

Also, the Zeit company organized one-day conferences — “Zeit day”, the first of which took place in 2017.

Investments

Zeit can be called a failure in terms of investments — in essence, the company was not perceived as a startup and this was its main problem. So, one of the investors in the future Vercel company — CRV — said when investing in the latter that they are “glad to resume business with Guillermo, after he returned to the path of entrepreneurship and founded Vercel”.

Actually, this was one of the main reasons for the rebranding that followed. And soon Vercel would become Guillermo's first unicorn (and the sixth unicorn with Argentine roots).
		<item><title><![CDATA[History of Vercel 2013–2015 (5/7). Automattic]]></title><link>https://alexdln.com/blog/vercel-automattic</link><guid isPermaLink="true">https://alexdln.com/blog/vercel-automattic</guid><pubDate>Wed, 14 Feb 2024 20:41:00 GMT</pubDate><description><![CDATA[Automattic. A company that played a massive role in shaping the modern internet and deserves a separate series of articles. However, it will only be touched on superficially here.]]></description><content:encoded><![CDATA[Automattic. A company that played a massive role in shaping the modern internet and deserves a separate series of articles. However, it will only be touched on superficially here.

In May 2003, Matt Mullenweg, together with Michael Little, founded a new platform for publishing blogs. It was not the first such platform; the most popular at the time was b2/cafelog. Indeed, Matt and Michael were the developers of that platform, but they decided to create their own product. The new platform developed rapidly and quickly gained a large audience.

This part will discuss what Guillermo Rauch and the Learnboost team did after Automattic acquired them.


Automattic

On May 27, 2003, Matt announced the availability of the first version of the new platform and named it “WordPress”.

The main task of Automattic itself (besides developing WordPress) became hosting and supporting websites written in WordPress. The main principle of monetization was that any user could create a site on WordPress, publish it for free, and then, if necessary, pay for additional features.

Automattic raised $6 million in Series A and B funding rounds from investors including True Ventures, Polaris Venture Partners, The New York Times, and others. In May 2013, Tiger Global invested $50 million in purchasing shares from Automattic's early investors. By that time, about 20% of websites were running on WordPress.

Automattic actively purchased interesting products, primarily due to the people working on them. In 2013, they met with Guillermo and Tian Lu.

Sale of CloudUp

On September 25, 2013, after the meeting, Automattic announced the purchase of CloudUp, along with the company LearnBoost and all related libraries (including socket and mongoose).

This was Automattic’s 12th acquisition (after Lean Domain Search, Poster, Simperium, CodeGarage, After the Deadline, Blo.gs, PollDaddy, IntenseDebate, BuddyPress, Gravatar and Plinky).

The CloudUp team set about updating the editor and tools related to media in WordPress. Matt himself (the founder of WordPress) admitted that CloudUp was significantly better than the WordPress media library. The editor was planned to implement real-time editing for simultaneous work on texts by several people.

Life at Automattic

The LearnBoost and Automattic teams had a similar view on open development — they participated in conferences, created and supported open-source projects. “Automattic and us share a history and vision: we have a distributed workforce, we passionately care about creating a better web and we support our open-source roots” — Tian Lu (co-founder of CloudUp).

The team took on the tasks set for them, but they did not plan to stop working on CloudUp — they created new projects, packages, and talked about the imminent expansion of the service.

They also continued to work on open-source projects. Thus, in 2014, the first stable version of socket.io was introduced.

Nevertheless, the story of the CloudUp service itself ended there. The site remains unchanged and still sits on the internet with a form for applying to join the testing.

The Team’s Future

Tian Lu

After the company’s acquisition, Tian remained the general manager of Cloudup, overseeing the creation of a new technology stack and a completely new editor.

Also in 2013, Tian created his own company, Tsukemen. As a co-founder of two startups, Tian raised nearly $5 million from CRV, Bessemer, RRE, and other investors before a successful exit.

Now, Tian is a Vice President of Product at Blockchain.com, responsible for product strategy and design. He joined the Blockchain.com team through the acquisition of his company Tsukemen.

Nathan Rajlich

Since 2013, Nathan has been working at the WordPress company on editors.

In 2014, he spoke at a conference in Buenos Aires on the topic of “Writing a webmodule” — about writing npm modules intended for use in browsers [speech].

Until 2016, he worked at Cloudup and Automattic. He currently works at Vercel.

Tj Holowaychuk

In 2014, while already working at segment.com as a backend developer, TJ participated in the development of YAL. That same year he wrote the article “Farewell Node.js” — an official goodbye to node.js and a move to the Go language — although he continued to support his js projects (primarily koa). Also in 2014, TJ sold express to StrongLoop (but that's a whole other story).

This was not just a departure from Node.js, but also a partial departure from programming and open-source. The following are TJ’s words:

“No, I have no intentions, my new goal is to live better. After all, open-source doesn’t pay the bills, so it’s better to focus on other things or if you just like a project, that’s cool.

Now I spend most of my time enjoying other things, I code for 2–3 hours a day if I don’t like something. Time is your real currency! Money is good, but don’t waste time. If you really like the project you’re working on, then do it, but don’t neglect other areas of your life (or people).”

In 2016, TJ created the Apex company, and Guillermo created the Now utility and the Zeit company.

Guillermo Rauch

Guillermo also worked on the development of WordPress, for example, he redesigned the video platform — VideoPress. For this, he “used the Virtual DOM approach and wrote a very simple version of React,” making it interactive and convenient. He added features such as searching for a moment by frames and embedding video functionality.

Guillermo left WordPress on October 13, 2015, and founded a new company — Zeit.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreie2tr5bsohrnha3u4wh6qmmhmzhvjsefyj3vx2w2o55cvywdkznnm@png" type="image/jpeg" /></item>
		<item><title><![CDATA[More libraries to the library god or how I remade i18n [next.js v14]]]></title><link>https://alexdln.com/blog/libraries-i18n</link><guid isPermaLink="true">https://alexdln.com/blog/libraries-i18n</guid><pubDate>Tue, 13 Feb 2024 20:45:00 GMT</pubDate><description><![CDATA[There are dozens of amazing libraries made for internationalization, such as i18n, react-intl, next-intl. They all do an excellent job of adding translations to an application or website. Most of them are tested, debugged, and consistently supported.]]></description><content:encoded><![CDATA[There are dozens of amazing libraries made for internationalization, such as i18n, react-intl, next-intl. They all do an excellent job of adding translations to an application or website. Most of them are tested, debugged, and consistently supported.

But they are all outdated.

After all, during this time, the react ecosystem has been developing. The latest version of next.js has major updates from react.js — cache, taint, new hooks, and, of course, server components. The React.js team will likely introduce these changes in May.

In this article, I will cover the key changes, personal experience, problems with existing solutions, the updates that were needed, the solutions I came up with, and, of course, answer the questions “why” and, most importantly, “what for”.

Changes

The first thing to start with is how the changes in React.js made translation libraries obsolete.

Despite the fact that the latest stable version of React.js was released almost two years ago, it has 2 other channels — canary and experimental, where canary is also considered a stable channel and is recommended for use by libraries.

This is the channel Next.js uses. Next.js launched server components without extra flags inside the so-called App Router — a new directory, an alternative to pages, with its own conventions and sugar (I wrote about its changes and problems in a recent article).

Server components definitely solve a number of problems and are a new milestone for optimizations. Including for translations. Without server components, translations were stored both in the compiled HTML and as a large object in the client script. Now you can get ready-made HTML, which doesn’t need anything on the client.

Next.js paid special attention to this feature.

Personal experience

You can add translations (according to the Next.js documentation) as follows:

// app/[lang]/dictionaries.js
import 'server-only'

const dictionaries = {
  en: () => import('./dictionaries/en.json').then((module) => module.default),
  nl: () => import('./dictionaries/nl.json').then((module) => module.default),
}
export const getDictionary = async (locale) => dictionaries[locale]()

// app/[lang]/page.js
import { getDictionary } from './dictionaries'

export default async function Page({ params: { lang } }) {
  const dict = await getDictionary(lang) // en
  return <button>{dict.products.cart}</button> // Add to Cart
}

This solution is described as ready and fully optimized. It works entirely on the server, and the client already receives ready-made HTML. However, the Next.js team missed one important detail — how to pass the language deep into server components.

A big problem with server components is that contexts are not available in them. The Next.js team explains the absence of these functions by the fact that Layout is not re-rendered, and everything that depends on props should be client-side.

Perhaps translation libraries were most affected by this. As a temporary solution, they suggest determining the language in the middleware and adding it to cookies. Then when building the page, read it in the necessary places. But reading cookies means enabling server rendering, which is not suitable for everyone.

In general, the main problem with existing solutions is that most of them are not made for server components. Components and functions were developed for runtime, using hooks and synchronicity.

Another inconvenience was caching in Next.js. Namely, it fully works only for GET requests, and if the translations weigh more than the 2MB limit, they will not be cached.

Implementation

Goals and tasks:

To my surprise, there is not a single library that would satisfy all these requirements.

The first thing you need is functionality. In the standard version, this is a hook that returns the function t and the Trans component for more complex translations. However, such functionality is needed in server components, and they have many of their own features.

Functionality

The main functionality is divided into two versions — for client components and for server ones and includes:

useTranslation, getTranslation - which return the function t for use in the markup, plus the current language;

import getTranslation from 'next-translation/getTranslation'

export default function ServerComponent() {
  const { t } = getTranslation()

  return (
    <p>{t('intro.title')}</p>
  )
}

'use client';

import useTranslation from 'next-translation/useTranslation'

export default function ClientComponent() {
  const { t } = useTranslation()

  return (
    <p>{t('intro.title')}</p>
  )
}

The interface turned out quite familiar; the functions support namespaces and queries. They are recommended by default, as they are simple both to use and in logic, and they return a ready-made string.

For more complex translations, you should use the ClientTranslation and ServerTranslation components. They can replace pseudo-components with real ones.

import ServerTranslation from 'next-translation/ServerTranslation';

export default function ServerComponent() {
  return(
    <ServerTranslation
      term='intro.description'
      components={{
        link: <a href='#' />
      }}
    />
  )
}

"use client";

import ClientTranslation from 'next-translation/ClientTranslation';

// Renamed locally to avoid clashing with the imported component
export default function ClientComponent() {
  return(
    <ClientTranslation
      term='intro.description'
      components={{
        link: <a href='#' />
      }}
    />
  )
}

There are also cases when translations need to be added outside the react tree. For this, you can use createTranslation anywhere.

import createTranslation from 'next-translation/createTranslation'
// ...
export async function generateMetadata({ params }: { params: { lang: string } }) {
  const { t } = await createTranslation(params.lang);

  return {
    title: t('homePage.meta.title'),
  }
}

Page setup

Now about setting up the page. To work with translations, you need to know the language. However, in server components, you cannot use context. For this, an alternative to createContext was made for server components in the next-impl-getters package - createServerContext and getServerContext.

For this, the package provides a NextTranslationProvider, which you need to create. It is recommended to do this at the page level to avoid problems with Layout re-rendering.

import NextTranslationProvider from 'next-translation/NextTranslationProvider'

export default function HomePage({ params }: { params: { lang: string } }) {
  return (
    <NextTranslationProvider lang={params.lang} clientTerms={['shared', 'banking.about']}>
      {/* ... */}
    </NextTranslationProvider>
  )
}

It is also necessary to indicate which translations are needed specifically on the client and to pass only them there. To do this, you can pass an array of client keys or groups to NextTranslationProvider using the clientTerms prop.

Sometimes a component also needs different translations, or different blocks are rendered depending on conditions. In such cases, different translations must be passed to the client. The conditional options can be wrapped in a NextTranslationTransmitter, with the client terms passed to it.

import NextTranslationTransmitter from 'next-translation/NextTranslationTransmitter';
import ClientComponent from './ClientComponent';

const ServerComponent: React.FC = () => (
  <NextTranslationTransmitter terms={['header.nav']}>
    <ClientComponent />
  </NextTranslationTransmitter>
)

As a result, only those terms that were specified above in NextTranslationProvider or NextTranslationTransmitter will be passed to client components.

Package setup

Before translations can be used, they need to be loaded. For this, you create a configuration file in the root of the project. Its minimal configuration is a load function, which returns the current translations, and a languages array listing the permissible languages. The load function is called in server components, and only the necessary keys are passed to the client.
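As a sketch (the file name and exact keys here are assumptions rather than the package's documented API):

// translation.config.js
module.exports = {
  // Permissible languages
  languages: ['en', 'nl'],
  // Returns the current translations; called in server components
  load: async (lang) => {
    const response = await fetch(`https://example.com/i18n/${lang}.json`);
    return response.json();
  },
};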

A very important point was the absence of unnecessary requests, that is, full caching is needed.

Here it is worth digressing a little. Starting from the latest version, Next.js builds the application in parallel in several processes. If each process lived with its own cache — requests would be sent from each. Probably, to avoid this, the Next.js team redesigned fetch — now it works with a common cache.

The package solves the problem the same way: it creates a common cache, and every process works with it. For this to work, you need to wrap the configuration with withNextTranslation in next.config.js.
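Connecting it could look something like this (the exact import path is an assumption):

// next.config.js
const withNextTranslation = require('next-translation/withNextTranslation');

module.exports = withNextTranslation({
  // the rest of the next.js configuration
});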

Conclusion

The solution turned out to be truly tailored to next.js — taking into account all its capabilities and problems. It also includes all the optimization capabilities provided by server components. The package is fully optimized for next.js, their concepts and views, which I fully share.

I faced the problem of translations and I had to make my own solution, which would work exactly as expected.

Despite the significant advantage in optimizations, the package is still inferior to large libraries in terms of translation capabilities. There is a lot of work ahead.

P.S. I will be grateful if you describe what you lacked in existing solutions or what functionality you consider most important.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreihefewoxfiip6ysroyns5pl2k4mfnea5jos2uxohphq3nqn5oqpgi@png" type="image/jpeg" /></item>
		<item><title><![CDATA[History of Vercel (4/7). 2013. Cloudup. Drag. Drop. Stream.]]></title><link>https://alexdln.com/blog/vercel-cloudup</link><guid isPermaLink="true">https://alexdln.com/blog/vercel-cloudup</guid><pubDate>Wed, 07 Feb 2024 20:56:00 GMT</pubDate><description><![CDATA[Cloudup is a clear and fast file-sharing service for files, videos, links, music, documents, code, text, and so on, which is both user-friendly and recipient-friendly.
Drag. Drop. Stream.]]></description><content:encoded><![CDATA[Cloudup is a clear and fast file-sharing service for files, videos, links, music, documents, code, text, and so on, which is both user-friendly and recipient-friendly.

Drag.

Drop.

Stream.

“We created Cloudup because we were disappointed with sharing services that took too much time for conversion, storage, management, and uploading — we just wanted to share anything, anytime.” — CloudUp team.

In this part, I want to talk about the history and reasons for creating the service and what came out of it.


Reasons

The LearnBoost educational platform, Guillermo Rauch’s first startup, had already become popular and secured its place in the market by this time. It was a simple, convenient and fast service that solved all problems related to education. However, the interests and needs of teachers did not end there — they wanted to share their work, publish lesson plans, share photos from lessons in blogs, and talk about their work on social networks.

The LearnBoost development team also shared a large amount of information — images, videos, documents, presentations — for personal or work purposes. Different tools were used for each task.

LearnBoost users needed a tool that would allow them to share files with all network users. In existing solutions, you had to log in to view the file. Another disadvantage was the lack of cross-browser compatibility.

CloudUp Video Presentation — https://cloudup.com/cYiu8eWgvxt

Competition

CloudUp was not the first to enter this market. By that time, there were already startups in the data-storage sphere (WeTransfer and YouSendIt), big players (Google Drive and Dropbox), and even social networks (Facebook, Twitter) handled this task quite well. Nevertheless, the goal of CloudUp was slightly different — not just to store files, but to share them. Still, there were direct competitors at the time as well — Droplr, CloudApp, Ge.tt.

The most advanced service at that time was Dropbox, but its main task was (and remains to this day) data storage, not file sharing and distribution.

On June 20, 2013, Guillermo and Thian Lu announced the creation of a new file-sharing service — CloudUp. From the LearnBoost team, TJ, Meredith, and Nathan also participated in the development of CloudUp.

Tasks

The main goal was to make the sharing of images, links, videos, code, documents, and everything else — simple and beautiful, both for service users and for those with whom these users share.

To do this, the following tasks needed to be solved:

Functionality

Drag. Drop. Stream. Three simple steps for files to travel the long path from the user’s device to viewers.

The service absorbed all the experience accumulated by the team and was built on modern technologies (many of which the LearnBoost team itself had created and developed), with a minimalist design. But this was not enough to stand out among the existing giants and become a market leader. The service needed to stand out by improving on all existing solutions and creating new ones.

Cross-platform and cross-browser compatibility

Google Chrome only surpassed IE in popularity in 2012. By the time of the CloudUp launch (June 2013), cross-browser compatibility was a big and important task. In addition to directly supporting all browsers, the service also solved the problem of file format support. The service converted documents into PDF, performed video transcoding, and compressed images for devices with weak internet connections.

Grouping and preview

Existing alternatives displayed folders in a “list” format; CloudUp used a “tile” format and called these not folders but streams. Streams differed from folders not only in their display style but also in their content.

In the “folders” of competitors, files were stored as a list with basic icons. Streams, on the other hand, contained tiles that let you see all the content in preview mode, be it a document, photo, file, or video. In these streams you could easily and quickly find the necessary files. This worked with really long lists of files; previews were available for everything, whether simple documents, raw photos, or even PSD, EPS, and AI files.

Also, directly from the service, you could view a file’s meta-information, such as size, extension, creation date, resolution, and much more.

Editing and security

Each stream was assigned a unique identifier that could not be forged or predicted, so the only way to view a specific stream was to get a link. For additional stream protection, it could be password-protected.

At any time, the file could be edited, replaced, password added or removed, or deleted.

Speed

One of the main features of LearnBoost was real-time operation, and the company that created the first library for these purposes — socket.io — was a leader in this area. The next startup was no exception: all files were uploaded and available in real time. As soon as a user started uploading files, a link was immediately available that they could share with friends. Uploads were performed not as whole files but as streams, which was a significant advantage, primarily for video.

Also, the product followed the ideas of web 2.0, that is, it was a single-page application. https://cloudup.com/blog/the-need-for-speed

Application

Along with the presentation of the service, an application for it was presented immediately. Initially it was only for OS X, but apps for other platforms were planned for the near future.

In addition to the main functionality, the application could also:

And these promises were kept. Almost immediately after the launch, a Windows application was presented.

The application could also track screenshots and automatically upload them.

Result

On June 20, 2013, the service launched pre-registration. The service offered users 1000 files up to 200MB each for free. In early September, the service launched in beta testing and sent out 10,000 invitations. In just a few weeks, users uploaded over 300,000 files, the total volume of which was almost 1,500 GB.

CloudUp quickly and confidently entered the market for cloud solutions, but to secure a strong position in it, the team had a lot of work to do. The service had huge and unique functionality that favorably distinguished it from competitors, but this was not enough to become established. The next step was to consider monetization.

The free tier allowed uploading up to 1000 files, which was enough for an ordinary user. Business needed special plans, applications for all platforms, and real-time collaboration on files. The company continuously improved its applications, the performance of the service, and the supported file formats, gradually closing all market needs.

The first company to fully switch to CloudUp for internal purposes was The Lift, on August 12, 2013 (a month and a half after launch). The Lift handled product development from concept to finished product, from software development to the creation of marketing materials. The company’s headquarters were in Southern California, with another office in Buenos Aires.

The second company was Sawhorse. Sawhorse is a full-service production company helping startups and large companies share their stories through video and post-production campaigns. This, perhaps, was the first company well familiar with the product and the team, as they made the introductory video for CloudUp.

Also, CloudUp launched a “scaling program” — a referral program, so that users could invite their friends to the service.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreiccznpr57p7fdz5tqa7252e4x3sjuk4sqlzqxmshiujjqlvruohvm@png" type="image/jpeg" /></item>
		<item><title><![CDATA[History of Vercel (3/7). 2009–2013. LearnBoost. Team that has become a leader in open source]]></title><link>https://alexdln.com/blog/vercel-learnboost</link><guid isPermaLink="true">https://alexdln.com/blog/vercel-learnboost</guid><pubDate>Wed, 31 Jan 2024 21:00:00 GMT</pubDate><description><![CDATA[…one of the most technologically advanced companies… socket, stylus, mongoose, n, express… Nathan Rajlich, Aaron Heckmann, TJ…]]></description><content:encoded><![CDATA[…one of the most technologically advanced companies… socket, stylus, mongoose, n, express… Nathan Rajlich, Aaron Heckmann, TJ…

LearnBoost is an online tool that efficiently manages classrooms. The company addressed the problems of every participant in the learning process, supplementing this with quality, convenience, and speed, gaining significant advantages in these characteristics compared to existing competitors at that time.

This part will talk about how the educational platform became one of the most tech-savvy companies and how it was able to attract, without exaggeration, the best js developers to its ranks.


Technical Vector

LearnBoost was introduced to the public in 2010, but its development began the previous year, 2009. LearnBoost was initially designed according to the standard structure of the time — one language for the backend and JS for the frontend; there were simply no other options back then. An alternative to this approach appeared a little later, when Ryan Dahl created Node.js. In 2009, Guillermo was on an IRC channel with Ryan Dahl, who talked about version 0.1 while warning that the tool was still not ready. However, the idea of universal rendering and simpler development of asynchronous services attracted listeners, and Guillermo was no exception. In the end, they decided to build everything in one language. “Why not?”

Although there were many reasons against it. The problem was that these were completely new technologies, approaches, and databases. They had shortcomings and vulnerabilities, so the team built alternatives on top of existing solutions or created them from scratch and opened the source code (the licenses of most open-source products allow you to clone and edit the code, but the resulting project must be distributed under the same license, including the open-source clause).

In addition to third-party tools, there were problems with Node.js itself, as it was too raw at that time. A large community quickly formed around Node.js, but it could not cope with the flow of tasks and difficulties. Thus, one of the first contributors to Node.js would later write an article about how he could no longer fight it [article]. Ryan Dahl himself had predicted the tool’s unreadiness at the 0.1 release. The LearnBoost team had to contribute many changes — “from small utilities to patches and HTTP servers” — to the Node.js repository.

Team and open-source

One of the main advantages of open-source, which Guillermo saw, was the opportunity to communicate with the smartest people on the planet — to work together on libraries, fix and develop the ecosystem. Such acquaintances give not just a pleasant experience of communication, but also a sufficient number of talented programmers who can be attracted to your project.

The first major open-source project in Guillermo’s history was Socket.io, which in 2012 ranked 3rd on GitHub by number of stars, overtaking Express.js.

Most libraries were developed and evolved primarily for the company’s needs. For example, in 2010, the team needed to style their products, but none of the existing alternatives suited them, so they created their own solution. The resulting preprocessor was named Stylus. Together they developed tobi, Cluster, knox (an Amazon S3 client for Node.js), node-canvas, and many others.

LearnBoost was not just one of the most promising startups, but also one of the most technologically advanced. In 2011, they were in second place by number of followers of their open source, overtaking GitHub, Twitter, and Mongo, losing only to Facebook. The number of followers is often directly proportional to the number of contributors, which gives a huge advantage in development speed, as more than 50 repositories on GitHub, including Mongoose and Socket.io, were developed by the community.

The company reached a high standing in the community, and much of the credit goes to its policy. But to an even greater extent, it is the merit of the people who stood behind the company for many years — both the authors of these libraries and the other participants in the process, who created a comfortable environment for development and all the necessary conditions.

Rafael Corrales. Co-founder

He graduated from the Georgia Institute of Technology. From a young age, he knew he wanted to do business, whether as a co-founder or as an investor, so after graduating he entered Harvard Business School. By the time the company was founded, he was in his second year. He ran a personal blog and was active on social networks.

“In addition to this, keep in mind that this is a game consisting of several rounds, if you think about it. Being a guy who is over 20, even if LearnBoost fails, and of course I hope it won’t, I’m going to be at the startup table in one form or another, presumably for the next 20, 30 years.” (Mixenergy interview from November 5, 2010).

Rafael actively communicated with angels and investors, attracting them to the project, and continued to develop in business more broadly. In 2012, alongside supporting LearnBoost, he became an advisor at Instacart. And on March 1, 2013, Rafael moved from co-founder of LearnBoost to member of its board of directors.

On April 1, 2013, Rafael joined Charles River Ventures, one of LearnBoost’s investors; since then he has acted as an angel and would participate in that role in Guillermo’s future startups.

Thianh Lu. Co-founder, product manager / designer.

Thianh Lu has a Bachelor of Business Administration in Finance and Marketing from the University of Massachusetts Amherst, graduated from the Eisenberg School of Management in 2002 and received a Master of Business Administration in Finance and Entrepreneurship from the Carroll School of Management at Boston College in 2008.

He started his career at Zecco (an online broker), where he worked on the design of an options-trading platform. The company had no full-time designer, so all the design work was done by Thian.

Meredith Ely/Bordoni. Community and Marketing Manager

Before coming to LearnBoost, Meredith also worked in education. For the previous 4 years, she worked at Stanford University, Kappa Kappa Gamma, and Teach For America.

Meredith oversaw the development and support of clients, content, event planning, sales, and also engaged in marketing, telling about LearnBoost and its free opportunities to teachers and schools.

In 2010, she began hosting the Ed-Tech Meetup — a movement that brings together teachers, technologists, and entrepreneurs for communication, experience sharing, and learning, as well as creating a tighter feedback loop between teachers and innovators in education. She actively grew the event, and by 2012 the Ed-Tech Meetup had more than 2000 participants. She coordinated, organized, and conducted dozens of events to bring together innovators and people from the education sector. In 2012, she handed over management of the event to EdSurge. Today, the Ed-Tech Meetup is the largest event in the US in the field of education technology.

Rafael, Thian, and Meredith created a unique look for the company and excellent conditions in which the development team was able to create an excellent product and leave a huge mark in the history of open-source.

Guillermo Rauch. Co-founder, Developer

By the time the company was founded, Guillermo already had experience in developing open-source libraries. He developed several plugins for WordPress and was on the list of main developers for the MooTools library.

The founding of the company was not the end point on this vector. The company was built on a completely new technology — Node.js — for which there were virtually no ready, high-quality solutions. On top of that, the company had quite modernist ambitions, such as speed and interactivity of the service. For this, they needed real-time.

WebSockets at that time were still at the protocol-refinement stage, and a more or less stable version would only arrive the following year, 2011. In 2010, Guillermo gave a presentation at JSConfEU on socket.io, which by then already had 1000 stars on GitHub. Although WebSockets existed, they had very limited support and modest functionality, so an overlay on top of this API was needed — similar to it, but solving the problems of cross-browser and cross-platform compatibility. Guillermo demonstrated the possibility of “real-time collaboration” [presentation]. The demo used at the presentation also relied on Express, AJAX, Jade (now Pug), and Mongoose.

In 2011, a library for launching a server with sockets was written — engine.io. The first version of socket.io was used by MS Office in 2012 to add real-time collaboration support (presentation). Guillermo actively traveled to conferences and talked about the libraries they had created, primarily socket.io. He even organized conferences himself — for example, in Argentina in 2012.

The project needed a database, so support for MySQL in Node.js was developed first (https://github.com/rauchg/node.dbslayer.js). Introduced in 2009, MongoDB attracted the company’s attention and was soon chosen for the LearnBoost project. The company stuck to the idea of “everything in JS”, and the next library developed by the team was Mongoose.

At the end of 2011, Guillermo developed the juice library, which converted styles written in <style /> tags into inline styles and worked both on the server and on the client.

A little later, in the same 2011, there was a fire in Guillermo’s house and Guillermo “lost all his property”.

Another extremely important member of the LearnBoost development team was TJ Holowaychuk. A web designer, web developer, CEO, and SEM specialist, he actively participated in Drupal projects until 2008, and from 2008 in open-source projects on GitHub (which itself launched in 2008).

TJ Holowaychuk. Developer

TJ was born and raised in Canada. He started his career in design but never limited himself to it. He wanted to handle all aspects of product development and therefore started learning programming. His first attempts came during design work, when he used Flash to write various scripts. He did not read books or attend a special school to learn programming; he just read other people’s code and delved into the details.

In 2009, TJ became one of the main contributors to node.js. At the beginning of the following year, 2010, he created a framework for node.js, which remains the most used and interesting backend framework to this day — express.js.

TJ actively participated in open-source and of course, interacted a lot with github. To simplify and improve interaction with this ecosystem, TJ created the git-extras library in 2010. He developed or participated in the development of Connect, Dox, n, Luna, Stylus, git-extras, Mocha, SuperTest, SuperAgent, EJS, Co, Commander and many other popular libraries.

Already in 2010, he considered testing an important component of project development and built the testing packages expresso and should.js. He is also the author of the jade library — an HTML preprocessor written in JS. Later, due to a rights issue over the name, the library was renamed to pug.

The startup was based in San Francisco, but no one ever saw TJ — he worked remotely. TJ never led a public life. There are only a few photos of him on the entire Internet, almost nothing is known about him, and there are only theories (including that he is not real). One of them is that someone saw him in Argentina at JSConf — and in fact, this is more than a theory: a video from after that conference exists. It was the same conference organized by Guillermo in 2012. They held a joint workshop there, which was never published for public access.

In 2012, TJ wrote the Axon library (message-oriented socket library).

Nathan Rajlich. Developer

Nathan had been a contributor to Node.js since 2010 and actively participated in the development of the platform until 2015. Also in 2010, he wrote the java-websockets library, which contained both the client and server parts and was written entirely in Java.

Nevertheless, he joined the company in 2011 as a junior js developer. At the same time, he wrote the NodObjC package, and in 2012 — node-gyp. His collection includes dozens of helper packages for node.js and hundreds of other packages, developed by him or to which he contributed.

Aaron Heckmann. Developer

Aaron joined LearnBoost six months after its foundation. Like the other members of the development team, in addition to his main job — platform development — Aaron supported and created open-source projects.

Also in the noteworthy year of 2010, Christian Kvalheim started working on the MongoDB driver for Node.js and actively developed this ecosystem over the next two years; in 2012 this driver was officially adopted into the MongoDB core. In 2010, Aaron also worked on interfacing with MongoDB, and it is to him that we primarily owe the development of mongoose.js.

In 2012, MongoDB not only adopted the Node.js driver into its core, but also permanently hired Christian Kvalheim and Aaron Heckmann.

While working at mongoDB, Aaron created many open-source packages (primarily for mongoDB).]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreihk2eztdeq253i3jtudwjrfpiunvf2rpx4gnumcl5nqryfmw7efzi@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Next.js App Router. Experience of use. Path to the future or wrong turn]]></title><link>https://alexdln.com/blog/nextjs-app-router-experience</link><guid isPermaLink="true">https://alexdln.com/blog/nextjs-app-router-experience</guid><pubDate>Thu, 25 Jan 2024 21:03:00 GMT</pubDate><description><![CDATA[Two years ago, the Next.js team introduced a new approach to routing, which was supposed to replace the so-called Pages Router and added a range of fundamentally new functionality.]]></description><content:encoded><![CDATA[Two years ago, the Next.js team introduced a new approach to routing, which was supposed to replace the so-called Pages Router and added a range of fundamentally new functionality.

In practically every release, I found plenty of useful and necessary things for both personal and commercial projects. Nevertheless, I skipped version 13 for commercial projects, as the functionality proved extremely unstable and insufficient. Now, however, this functionality has been moved to the stable category: the App Router is considered the main one, and the Pages Router is supported rather for backward compatibility and gradual transition.

Next.js has taken a big step, taking responsibility for caching and working with requests, adding server components, introducing parallel and interception routes, as well as a series of other abstractions. This article will discuss the reasons for this step, the possibilities, problems, and personal opinion — was this a step into the future or a step straight into a pit.

Retrospective

Before diving into the latest update, it’s worth a bit of retrospective. I have been using Next.js since version 8, about five years now, and have closely followed every update since; I dig under its hood to understand (and occasionally fix) problems and build small companion packages. My impressions of each new version almost always repeat — from “what the hell is that” and “who did this!” to “this is brilliant!” and back around the circle.

What I definitely understood over these years: Next.js cannot be called a model of stability. If functionality has moved into the stable category, it means you should wait a couple more versions until its main errors are fixed (if you’re lucky). I have encountered a dozen bugs, some of which forced us to roll back months of the team’s work, and some of which became a solid basis for temporary crutches (well, you understand).

The number of fixed bugs has even become a line item in recent releases.

Regarding the stability of Next.js, an interesting article was recently published by Kent Dodds, one of the (now former) Remix developers, to which Lee Robinson, a VP at Vercel, subsequently responded. My personal opinion on this dispute is at the end of the article. The framework has been developed very actively all these years. Frequent major updates have led to a large number of bugs, but also to the rapid implementation of very useful and promising functionality (there is no better test environment than prod).

For example, with the appearance of rewrites and redirects, it became possible to abandon frequent changes to nginx; with the implementation of middleware, to rewrite routing logic and improve a/b testing capabilities; with the advent of ISR, to update pages without redeploying the service, and with its improvement (on-demand ISR), to simplify that significantly.

Next.js Today

No matter how many problems I have encountered, Next.js remains the most technologically advanced framework. It offers a multitude of useful functionality that covers the absolute majority of needs. And with each update, this coverage only grows (in size, that is, not in quality).

The most important update of recent versions, perhaps, can be called server components, even despite the fact that this is the development of the react.js team.

Server components make it possible to perform complex logic at the build stage without additional layers. With them, there is no need to ship large packages to the client, worry about secrets, or optimize translations — all of that becomes the server’s responsibility.
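For example, a server component can fetch and render data entirely on the server; neither the request code nor any heavy parsing reaches the client bundle (the API URL below is illustrative):

// app/posts/page.tsx, an async server component
export default async function PostsPage() {
  const posts: { id: string; title: string }[] = await fetch(
    'https://api.example.com/posts',
  ).then((res) => res.json());

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}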

Despite all their usefulness, server components have one huge minus for real projects — the lack of contexts, because of which data has to be threaded through tens of levels of nesting, including paths and page parameters. I partly worked around this with the next-impl-getters package, but, of course, such functionality should not exist separately from the framework. In addition to participating in the development of server components, Next.js develops and supports dozens of its own solutions, and for version 14 that is above all the App Router — a new approach to routing configuration that combines the capabilities of build, server, and runtime.

App Router

An important feature of Next.js is its routing — or rather, the use of the file system as a configuration of this routing. Previously, everything about this was simple — the path to the file where the page is located is the path through which it will be available on the site. In the new version, routing has become more complex, and the first thing that violates this logic is groups.

Groups

Groups allow you to combine pages by any criterion. This makes it possible, for example, to create different layouts for pages at the same level, as in the sketch below.
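A group is a directory in parentheses, and it never appears in the URL:

app/
  (marketing)/
    layout.tsx          // layout only for the marketing pages
    about/page.tsx      // served at /about; “(marketing)” is not part of the path
  (shop)/
    layout.tsx          // a different layout at the same level
    cart/page.tsx       // served at /cart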

Layouts and Templates

Layouts are a kind of wrapper for all pages below them in the directory — an alternative to the previous _app and _document abstractions. Their important feature is that they are not re-rendered when switching between pages. This is also their minus: if the common layout differs in any way depending on the page, this abstraction cannot be used.

Unlike layouts, templates are rebuilt every time. However, dynamic parameters cannot be obtained in them (my package solves this problem, but again, it should work out of the box). As a result, both abstractions cover far from all cases, and you end up adding a common component to every page.
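For reference, the two abstractions are declared identically; only the file name differs:

// app/blog/layout.tsx, preserved between navigations (state survives);
// renaming this file to template.tsx would remount it on every navigation
import type { ReactNode } from 'react';

export default function BlogLayout({ children }: { children: ReactNode }) {
  return (
    <section>
      <nav>{/* shared navigation */}</nav>
      {children}
    </section>
  );
}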

Parallel Routes

Parallel routes are a mechanism that lets you load several independent slots on one page. Each slot can have its own templates, error handlers, and loaders.
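Slots are declared as @-prefixed directories and arrive in the parent layout as props:

app/
  page.tsx              // fills the implicit children slot
  @feed/page.tsx        // an independent slot with its own loading and error files
  @analytics/page.tsx
  layout.tsx

// app/layout.tsx
import type { ReactNode } from 'react';

export default function Layout({
  children,
  feed,
  analytics,
}: {
  children: ReactNode;
  feed: ReactNode;
  analytics: ReactNode;
}) {
  return (
    <html>
      <body>
        {children}
        {feed}
        {analytics}
      </body>
    </html>
  );
}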

Intercepting Routes

Intercepting routes are used in conjunction with parallel ones. When a user navigates from one page to another, they make it possible to “intercept” the navigation and display something other than the full page — for example, intercepting navigation to an image and showing it in a modal, while a page reload shows the full image page.
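The convention uses path-matching prefixes: (.) for the same level, (..) for one level up, (...) for matching from the root:

app/
  feed/
    page.tsx                   // the feed, with links to /photo/[id]
    (..)photo/[id]/page.tsx    // intercepts client-side navigation and can render a modal
  photo/
    [id]/page.tsx              // the full page on direct load or refresh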

Other Abstractions

Directories also gained the ability to declare error handlers and loaders (dynamic parameters are inaccessible in them as well). This works both with full pages and with parallel routes.

Working with data and queries

Another noteworthy change is the rework of fetch (again). Calling fetch now triggers a Next.js wrapper that modifies the request, processes it, and caches the result. All of this happens incrementally: the last saved response is returned immediately, while a request for fresh data executes in the background.

Much of the fetch rework is tied to the fact that during a build, Next.js assembles dozens of pages simultaneously, and they often make the same requests. Previously, to avoid this, you had to write a custom loader class and use it everywhere.
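The caching and revalidation knobs now live on fetch itself; this is the documented extended API (the URLs are illustrative):

// Anywhere on the server, e.g. in a server component or route handler.
// Components requesting the same URL during one render share a single request.
export async function getPosts() {
  const res = await fetch('https://api.example.com/posts', {
    next: { revalidate: 60 }, // return cached data, refresh it in the background at most every 60s
  });
  return res.json();
}

// A per-request opt-out of the cache on the same API:
export async function getViewer() {
  const res = await fetch('https://api.example.com/me', { cache: 'no-store' });
  return res.json();
}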

Caching was also used in many other places, such as processing and rebuilding pages, handling routes, redirects and rewrites, determining request status, etc.

I don’t know why libraries have recently started taking on extra responsibility — whether it’s caching in Next.js or form handling in React.js. It’s a strange attempt to standardize something that used to have hundreds of variations, each with its own peculiarities, on the assumption that this will be better. The Next.js team later added the ability to disable caching or configure it yourself, but only for part of the functionality.

Conclusions

Next.js has been enriched with capabilities and optimizations and now covers even more situations. However, without server contexts and without dynamic parameters in many abstractions, its potential is significantly curtailed.

The rework of the basic fetch API and the enhanced caching have made it possible to stop worrying about performance — but still only in small projects; in large ones you regularly stumble over the shortcomings of this solution (e.g., a rewrite cached when it shouldn’t be, wrong redirects, the caching size limit, an outdated page returned despite caching being disabled).

The Next.js team has started paying more attention to bugs, though largely because of the sharp increase in their number. On the whole, however, the new functionality is largely complete. Another question is whether you are satisfied with the logic embedded in it — and there is no definite answer here. Some found these changes and their problems critical and continue to use the Pages Router; some found the new functionality insufficient and had to resort to a number of hacks (I am in this group); and some find these solutions perfect, because their project doesn’t touch the problem areas.

Opinion on the parties to the dispute

I read Lee Robinson’s arguments several times, but to my surprise I didn’t find answers to many of the questions, so I’ll go through the list of problems from Kent to which I found no answer in Lee’s article:

1. “instead of recommending using the web platform’s Stale While Revalidate Cache Control directive, they invented a highly complicated feature called Incremental Static Regeneration (ISR) to accomplish the same goal”

Why was it done this way? The fact is that cache-control is a header whose handling is the browser’s responsibility: if we passed stale-while-revalidate with a value of one day, the browser would not update the cached page for the current user for a day. In the Next.js paradigm, the cache can be updated not only at a specific moment for all users at once, but at any moment, by calling the on-demand ISR functionality. Among other things, this caching logic applies not only to the browser, but also to requests from the server or during the build step.

The storage can be a file system, cloud storage, or a user-configured store. So despite the similarity of ideas, their logic differs (and, in my experience, for the better in this context).
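A sketch of on-demand revalidation in the App Router. revalidatePath is the real next/cache API; the route and path here are illustrative:

// app/api/revalidate/route.ts
import { revalidatePath } from 'next/cache';

export async function POST() {
  revalidatePath('/blog'); // the cached page is refreshed for all users at once
  return Response.json({ revalidated: true });
}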

2. “OpenNext exists because Next.js is difficult to deploy anywhere but Vercel”

Next.js itself is incredibly easy to deploy — yarn build, yarn start. That’s all. No special environment features, no secret dependencies. OpenNext is trying to replicate Vercel capabilities, no more.

At the same time, it is worth recognizing that there is some dependence on Vercel in Next.js. For example, I mentioned above that cloud storage can be used as the cache storage, and inside Next.js there is logic for using Vercel’s cloud storage. However, by default the file system is used, and if desired you can easily configure your own cloud storage or, for example, connect Redis (which was suggested in the latest release). There are a few such places, but for all of them there is a default option, which is more than enough.

3. “Vercel is trying to blur the lines between what is Next.js and what is React. There is a lot of confusion for people on what is React and what is Next.js, especially with regard to the server components and server actions features”

Despite the tremendous efficiency of this collaboration, I have to agree that the line between Next.js and React is blurring. If before you could say that Next.js was a test environment for React, now it feels as if Next.js itself is promoting ideas into React in order to use them. It is Next.js that talks about server components and server actions, which creates the feeling that these are its own developments. Perhaps the upcoming React Conf will correct this situation.

Three key React.js developers were hired at Vercel — Andrew Clark, Sebastian Markbåge and Josh Story. However, we’ve heard about server components for a very long time and the work began before these developers moved to Vercel.

4. “Next.js violates this principle in many ways. One example of this is the decision to override the global fetch function to add automatic caching. To me, this is a huge red flag”

Here Lee did answer: “In Next.js 14, for example, if you want to opt out of caching, you would use noStore() instead of [option] cache: 'no-store' at fetch”.

But I don’t understand how this answers the problem. I also don’t see objective reasons why the Next.js implementation was not exported as a separate API — e.g., a fetchNext() with a linter warning when a regular fetch is used (just as they did with the Image tag instead of a manual img).
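For reference, the two opt-out styles being contrasted look like this in code (unstable_noStore is the current export name in next/cache; the URL is illustrative):

import { unstable_noStore as noStore } from 'next/cache';

export default async function Page() {
  noStore(); // opts this whole render out of caching
  // ...versus the per-request form on the web-platform API:
  const data = await fetch('https://api.example.com/data', { cache: 'no-store' }).then((res) => res.json());
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}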

5. Stability and Complexity

A significant part of the article above is dedicated to stability. As for complexity, I want to once again acknowledge the amazing Next.js documentation, where it is very easy to find answers. I see PRs offered to the Next.js documentation every day, even for minor differences in wording, making it not just convenient but as clear as possible.

Postscript

In addition to next-impl-getters I started working on other wonderful packages:

next-impl-config — Next.js essentially works in four environments — build, server, client, and edge — with configuration described for only two of them, build and server. This package makes it possible to add settings for each possible environment.

next-classnames-minifier — due to the peculiarities of Next.js caching, it is difficult to configure the compression of class names down to single characters (.a, .b, …, .a1); this package was made to solve that task and was the subject of a recent article.

next-translation — I never really liked the existing solutions in the context of Next.js, and I like them even less now, with the advent of server components. This package was designed primarily with server components in mind and with maximum optimization (by moving logic to the build stage and/or the server side).

UPD: Added server contexts to next-impl-getters.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreieg546lag6g5xaj75xe3tkpb6wz6f7frgxjvhawec5ijawoc2uxde@png" type="image/jpeg" /></item>
		<item><title><![CDATA[History of Vercel (2/7). LearnBoost. A leading tech company]]></title><link>https://alexdln.com/blog/vercel-learnboost-start</link><guid isPermaLink="true">https://alexdln.com/blog/vercel-learnboost-start</guid><pubDate>Wed, 24 Jan 2024 21:05:00 GMT</pubDate><description><![CDATA[The first startup, investments, developments. An educational platform that became a leading tech company of its time.]]></description><content:encoded><![CDATA[The first startup, investments, developments. An educational platform that became a leading tech company of its time.

LearnBoost is an online tool that allows for efficient classroom management. It provided teachers, students, and parents with the ability to view grades, check attendance, access various reports and plans, as well as communicate and edit these and other platform entities at any time.

LearnBoost addressed the challenges faced by each participant in the educational process, complementing it with quality, convenience, and speed, giving it a significant advantage over existing competitors at that time. Many paid solutions still cannot match the level provided by this platform’s free version. Despite this, LearnBoost ceased its operations in 2019.

In this section, we will discuss how and by whom this company was created, as well as its principles and achievements.


Background and Introduction

To understand the reasons for creating this startup and its potential profitability, it is necessary to look at the state of public education services around 2010. The largest university in America in 2010 was the University of Phoenix, which had about half a million students and earned $4.5 billion in revenue. The university’s main distinction was that a significant portion of its education was conducted online.

During those years, government institutions were undergoing a painful and expensive transition to electronic records. The goal was to create information systems for educational institutions that would allow for the following tasks:

In 2010, an attempt was made to standardize the education process by establishing new requirements for knowledge and assessment called the Common Core State Standards. This standard was adopted by 41 states and the District of Columbia.

To address these tasks, online systems were created to provide teachers, administrators, parents, and students with access to this data from any computer. The first companies in this market were:

These companies took the lead in the market, but their pricing policies were extremely high, their development was slow, and their services were inconvenient (according to Rafael Corrales).

There was also at least one free gradebook application on the market — Engrade, created by a group of wealthy internet entrepreneurs from San Diego, with more than 250,000 users. Another inexpensive competitor was MyGradebook, which offered limited functionality compared to PowerSchool for $50 per year.

In 2010, LearnBoost entered this market, created by Rafael Corrales, Thianh Lu, and Guillermo Rauch.

Formation of the Company

LearnBoost was created simultaneously on two fronts — on one side was Rafael Corrales, a second-year student at Harvard Business School (HBS), and on the other side was the collaboration between Guillermo Rauch and Thian Lu. They were complete strangers, but driven by the common goal of improving the field of education.

Rafael Corrales

Rafael, like many other HBS students, was enthusiastic about starting his own startup even during his studies, and the field of education seemed most suitable for him. The story of Rafael’s startup development can be traced back to 2009 when he was in his second year of graduate school.

“I went into my second year and said I wanted to do something in the education space, at the intersection of education and technology… Essentially, I bought ‘The Four Steps to the Epiphany’ and followed all the steps outlined in the book. I sat down with 30–40 different teachers, administrators, and people who were experts in the field of education.”

By understanding the needs of the teaching staff, he developed a plan and, before approaching investors and seeking co-founders, decided to develop a prototype of the application. Rafael took out loans (from HBS), and when the debt grew too large, he started paying out of his own pocket. He hired two inexpensive outsourced developers recommended by an acquaintance, who developed the first prototype. The alpha version was only a minimum viable product (MVP) and included nothing but a gradebook.

According to Rafael, the reasons for success in attracting investors were:

One of Rafael’s professors, Karim Lakhani, took an interest in Rafael’s endeavors, supported him, provided advice, and introduced him to his friend, Harper Reed. Reed became a mentor and angel investor at the idea stage and introduced Rafael to his friends, one of whom was Babak Nivi.

In 2010, Nivi was one of the co-founders of AngelList — a service that connects venture capitalists and angels from around the world.

These angels made a significant contribution to the development of the business. They were valuable not only as investors but also as mentors who could provide practical advice, help solve problems, and attract new investors.

Harper Reed served as the Chief Technology Officer for Barack Obama’s presidential campaign from April 2011 until the elections in November 2012. A central component of this work was the Narwhal project, a centralized database for the campaign. Reed helped assemble a team of developers from technology companies such as Twitter, Google, Facebook, Craigslist, Quora, Orbitz, and Threadless. The program aimed to connect all the information about voters so that every collected fact would be accessible to all campaign branches. This information significantly enhanced campaigning capabilities.

Guillermo Rauch and Thian Lu

Together, they created LearnBoost, an educational platform for schools — web-based software for students, parents, teachers, and class administrators in grades K-12. It included real-time collaboration tools and online gradebooks for teachers and parents, with quick access to grades and student performance tracking.

Guillermo was responsible for the technical aspects, while Thian focused mainly on interface design and product design. They were based in the DogPatch Labs laboratory. Ryan Spoon, the head of DogPatch in San Francisco, wrote about them in his blog and considered their venture very promising.

DogPatch Labs was a business incubator in its best manifestation and certainly deserves attention in this section.

DogPatch Labs

At that time (2010), the San Francisco office housed about 65 entrepreneurs, and including participants from other regions and alumni, the total number was around 300. This list largely consisted of participants with experience working at Google, Yahoo, eBay, Microsoft, Zynga, Slide, Facebook, Imeem, and other IT industry leaders.

During the “training” period (6 months), startups enjoyed the following privileges:

1. Community

2. Access to investors

3. Mentoring

4. Events

5. Office and its location

6. Perks

Introduction

As mentioned earlier, Ryan Spoon wrote about LearnBoost, among others, in the laboratory’s blog post about the launch of a new batch of residents. Some of Rafael’s acquaintances were following the laboratory’s activities and, after reading the article, told him that this company was doing something similar to what he was doing. By that time, Rafael had already attracted investment for the alpha version of his product and planned to close the round soon. He emailed Guillermo and Thian, inviting them to meet and discuss the field of education. The expected 15-minute chat turned into several hours of brainstorming about what the three of them could do together.

Their collaboration bore fruit by July 2010. In the next round, they attracted four venture investors.

Business Component

Paid products, albeit inexpensive, condemn government institutions to endless bureaucracy. Rafael’s business plan was to attract as many users as possible through a free tier with basic functionality; those users would recommend the purchase of additional features to management and tell their acquaintances about the service. Thus the product, creating a viral effect, could spread on its own. In essence, Rafael planned to use the viral marketing strategy that David Skok would formulate the following year, 2011.

Initially, the service was only available in English and Spanish. LearnBoost quickly entered external markets: by 2011 it supported 5 languages (English, French, Spanish, Portuguese, and Dutch), by 2012 — 14 languages, and by 2013 — 21 languages. The platform also had full integration with Google Drive and Google Calendar.

Investments

LearnBoost received its first investments from investors on the AngelList, namely from George Zachary (invested in Twitter) and Jeff Fagnan (invested in Songbird). George later introduced other members of the list to this startup, who also invested: Bill Lee (investor in Tesla Motors), James Hong (investor in Slide), Othman Laraki (founder of Mixer Labs). There were also investments from individuals who were not members of this list — RRE (investor in Venmo) and Bessemer (investor in Postini).

In July 2010, the company received investments of $975,000 from Bessemer Venture Partners, Charles River Ventures, RRE Ventures, and Atlas Venture. In May 2011, the company raised $1.9 million.

The company would next appear in big-market news in 2013, but before that there was another startup in Guillermo’s life — CloudUp, which he and his team had been actively working on for the previous year.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreiedhf53jarfxkdyh4svwhb6xnxtuysp4gh2jxc7bplf3pdbl2ebj4@png" type="image/jpeg" /></item>
		<item><title><![CDATA[History of Vercel (1/7). 1990–2009. Guillermo Rauch. Childhood and first steps in programming.]]></title><link>https://alexdln.com/blog/vercel-guillermo</link><guid isPermaLink="true">https://alexdln.com/blog/vercel-guillermo</guid><pubDate>Wed, 17 Jan 2024 21:07:00 GMT</pubDate><description><![CDATA[He was born in a small town in Argentina, did not finish school, went to work in Switzerland at the age of 17, and emigrated to the US at 18 to start implementing his ideas as an entrepreneur.]]></description><content:encoded><![CDATA[Perhaps the most favorite question in interviews — “How did you get into programming?”. Without a doubt, Guillermo Rauch would have found an answer to this question.

He was born in a small town in Argentina, did not finish school, went to work in Switzerland at the age of 17, and emigrated to the US at 18 to start implementing his ideas as an entrepreneur. He has founded several successful startups and created next.js, the now CLI, the HyperTerm terminal, socket.io, and dozens of other open-source projects.

He has already presented several releases of next.js, and his company Vercel, after rebranding, attracted $313 million in investments and received a valuation of $2.5 billion, making it the 6th unicorn with Argentine roots.

This is an amazing and inspiring story, and in this part, I will tell you how it all began.


Family and Homeland

Guillermo Federico Rauch was born in Lanus in December 1990, into a family of engineer-technologists and chemists.

Lanus is part of the Buenos Aires conurbation and is also an industrial center. The city has chemical, military, textile, leather-rubber, and dozens of other types of industries. The city is home to several technical schools as well as a medical center.

Argentina was in a crisis at that time. The new government launched massive reforms (which even led to an attempted military coup in 1990), including a currency reform. These reforms quickly pulled the country out of the crisis, and by 1995, the inflation rate had normalized (4%).

Guillermo has an older brother, Ricardo Rauch, who later became the designer of two mega-unicorns in a row — Auth0 and Scale.

Education

In the early years of his education, he attended the Jose Manuel School, which was located near his home. During high school, he transferred to one of the most prestigious schools in Argentina — the Carlos Pellegrini High School, where his brother also studied.

“El Pellegrini introduced me to things that, perhaps, in another universe, I would have learned at university.”

During his school years, he was a fan of The Beatles and a regular at McDonald’s, and loved mathematics and foreign languages. He was able to learn programming and English online, but missed out on history and Portuguese, since he left school at the age of 17.

In 2007 (when Guillermo was 16–17 years old), unique events took place at the Carlos Pellegrini School. After the resignation of the rector — Abraham Leonardo Gak, and the arrival of Juan Carlos Viegas, a crisis began at the school, resulting in the school being seized with the demand for a change of rector. During Juan Carlos Viegas’ tenure, the school received over 80 bomb threats.

First Steps in Programming

In 1987, one of the most famous TV series in America premiered — ‘Star Trek: The Next Generation’. It became the most popular installment in this universe, and a whole subculture grew around it — Trekkies. The series reached Argentina only in 1990 and, as in the rest of the world, became very popular. The first computer appeared in Guillermo’s family when he was 7 years old. It ran Windows 95.

His father was all about engineering and watched Star Trek. He understood that the future belonged to computers, so he bought new and advanced things for his family. He also subscribed to PC Users magazine, which occasionally shipped various extras to subscribers. One day, Guillermo’s family received a CD-ROM with Red Hat Linux on it. It interested Guillermo’s father, and he said they should try it out. It was a very early version, but the installation process came with a graphical interface, which made it painless.

The average nominal salary in Argentina at that time was ~$8200/year — less than $700/month, which also had to cover taxes, expenses, and the support of two children. Computers at that time cost over $1300. Starting with exploring alternative operating systems, Guillermo became interested in working in the console (an interest that has remained over the years). Since that Red Hat Linux version was not yet stable, there were problems with the internet, and at first Guillermo was busy fixing them. Then he communicated on various IRC channels, learning new things and sharing ideas. He later tried Debian (and liked that system).

In 1998, broadband internet began to be gradually deployed in Argentina. At the age of 11, when his time was divided between games and programming, one of Guillermo’s main hobbies was emulating games on Linux — installing Wine, configuring it, and optimizing it to the maximum. He collected configurations, installed modules, rebuilt the kernel, watched the metrics, and then returned to debugging — or, after much persuasion from his mother, finally handed the computer over so she could send an email (in the cases when he hadn’t forgotten to add the ethernet module).

But the major turning point was his introduction to JavaScript (for which the ES3 version had just been released).

First Job and First Experience in Open Source

According to Argentine legislation, it was possible to get a job only from the age of 14 (currently 16). By the age of 13, he had mastered JavaScript to such an extent that he started working as a freelancer abroad and earned $1000 per week.

At the same age of 13, Guillermo met Richard Stallman (the main evangelist of the free internet, founder of the free software movement, GNU project, Free Software Foundation, and the League for Programming Freedom) in Buenos Aires, where Stallman gave a lecture on “Free Software and GNU/Linux.”

Since 2006, Guillermo had been running his blog. He used WordPress and the theme “Peaceful Rush” to create the blog. Also, since 2006, he had been working with MooTools — a newly created promising library. He actively participated in open source and answered questions on “a Spanish service similar to Stack Overflow.”

In 2007, at the age of 16, he developed a plugin called FancyMenu, thus becoming a core developer in MooTools. In the same year, he developed a plugin for WordPress — WP-o-Matic — a kind of admin panel for collecting articles from different blogs into one blog [read more].

The design for the plugin was developed by Ricardo Rauch.

A year later, at the age of 17, thanks to the recommendation of another core developer, Aaron Newton, Guillermo was invited to work at a startup in Lausanne, Switzerland.

First Experience in Startups

Faced with the choice between education and starting work at startups, he chose the latter. He left school and decided to go to Switzerland.

He was also wanted by another company, already well known to the public — Facebook. The Swiss company was developing a product on the MooTools framework and was looking for developers familiar with it, and the most obvious place to find them was the list of core developers.

In 2008, GitHub was introduced, which refreshed and changed the world of open source. Companies created in those days worked side by side on new libraries, developing them and building communities. Collaborative development on many of them gave Guillermo many interesting acquaintances that would prove useful in the future.

Two years after Guillermo’s invitation, the company decided to open its business in the US, in the mecca of IT startups — Silicon Valley (San Francisco).

Further Path

Seeing how the company achieved such a level in just 2 years, Guillermo realized that it was possible to create and develop a company from scratch, and that he could go this path and create something of his own.

At the age of 18, Guillermo emigrated to San Francisco, California, in 2009, where he decided to start his own path as an entrepreneur.

In California, together with partners, he started working on his first startup.

The next part will be published in a week, on November 24th.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreic6freuprcaufclqvolnapykpskjonor4l6ibvrouuxbiv5ldua5a@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Compression of css classes. Next.js. Let’s make the web Even faster.]]></title><link>https://alexdln.com/blog/css-compression</link><guid isPermaLink="true">https://alexdln.com/blog/css-compression</guid><pubDate>Mon, 15 Jan 2024 21:09:00 GMT</pubDate><description><![CDATA[For many years, there have been debates about how best to name classes — according to BEM, by objectives, by components or however you like, but with the addition of a hash. And this is indeed an important question, which method will be comfortable in the development of a large and evolving project. But, what do these methods mean for the user, does he need these classes and how are they related to his experience?]]></description><content:encoded><![CDATA[For many years, there have been debates about how best to name classes — according to BEM, by objectives, by components or however you like, but with the addition of a hash. And this is indeed an important question, which method will be comfortable in the development of a large and evolving project. But, what do these methods mean for the user, does he need these classes and how are they related to his experience?

Sometimes, when I look into the styles of projects, I involuntarily get scared by the accumulated length of the names — module, block, element, subelement, modifier 1, modifier 2. BEM is really great, and I don’t intend to deny it, but the sizes it produces leave much to be desired.

Long classes increase the weight of the page, which in turn increases the loading time of the resources most important for rendering — the document and the style file — directly affecting the FCP (First Contentful Paint) and LCP (Largest Contentful Paint) metrics.

This has become one of the reasons why I have been looking at modules for a long time (in addition to isolating styles and storing them where they are used).

Modules let you name classes more briefly, scoped to the current component, while keeping development convenient. But hashes are then appended to the classes, making them longer again, so the advantage is not as big as one would like. Which finally brings us to the topic of the article.

Compression of class names

So, what are the methods to shorten classes:

shortening names manually – writing the shortest possible class names by hand;

shortening the template – configuring the pattern used to generate module class names;

hashing – replacing names entirely with a short hash.

The first way is not suitable for anything bigger than a to-do list: by making classes too short, we either lose DX or risk name collisions.

For the second and third methods, css-loader offers the localIdentName property for modules.

localIdentName: "[path][name]__[local]--[hash:base64:5]"
localIdentName: "[hash:base64]"

The most optimal compression

By choosing the right rule, you can significantly reduce the size of class names, but “significantly” is still not the maximum. The maximum reduction shortens class names to single characters: .a, .b, .c, and so on.

This approach is used, for example, by Google, Facebook, and Instagram.

To implement such a solution, we are interested in the getLocalIdent property, which allows you to pass a function for generating a name. You can also use packages such as posthtml-minify-classnames or mangle-css-class-webpack-plugin.
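To make the idea concrete, here is a rough sketch of such a getLocalIdent in a plain webpack config. The alphabet, cache, and counter here are illustrative assumptions, not the actual algorithm of the packages mentioned above.

// webpack.config.js - a sketch of single-character class generation via getLocalIdent
const alphabet = 'abcdefghijklmnopqrstuvwxyz';
const cache = new Map();
let counter = 0;

// 0 -> "a", 25 -> "z", 26 -> "aa", ... - letters only, so a name never starts with a digit
const toShortName = (index) => {
  let name = '';
  let i = index;
  do {
    name = alphabet[i % alphabet.length] + name;
    i = Math.floor(i / alphabet.length) - 1;
  } while (i >= 0);
  return name;
};

module.exports = {
  module: {
    rules: [
      {
        test: /\.module\.css$/,
        use: [
          'style-loader',
          {
            loader: 'css-loader',
            options: {
              modules: {
                // the same file and local name always map to the same short class
                getLocalIdent: (context, _template, localName) => {
                  const key = `${context.resourcePath}::${localName}`;
                  if (!cache.has(key)) cache.set(key, toShortName(counter++));
                  return cache.get(key);
                },
              },
            },
          },
        ],
      },
    ],
  },
};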

The article could have ended at this point, if it were not for one detail: I use Next.js.

Solution

Next.js has several peculiarities that prevent these solutions from being used. The most obvious one is that it does not let you configure getLocalIdent from the outside.

That’s why, three years ago, I made a package — next-classnames-minifier. In it, I implemented a name-selection algorithm and set up the injection of getLocalIdent into the necessary webpack rules. Over the following years the package received minor updates, but something about it kept me from calling it complete and ready for use in commercial projects.

The main problem was the need to delete the built application folder and the cache in CI every time, which, of course, greatly hurt the development experience. The culprit is the second peculiarity of Next — its caching system.

If a component has already been built, it may not be rebuilt on the next launch of development mode or the next build. So on every restart the algorithm started from scratch and handed out the simplest names (.a, .b, .c) again, while a number of components and styles that were not rebuilt had already received those very names during the previous launch, which led to collisions.

For this reason, such a solution is not built into Next.js itself.

Make friends with Next.js

Obviously, it was critically important to get rid of the cache-clearing problem, and a solution was found. Now the package, like Next.js itself, caches its results — the generated names — and on each start restores them from the cache, analyzing them and checking that they are still valid.

At the same time, builds did not get slower: the package uses the same optimized name-selection algorithm, and thanks to caching it works even faster than the default name creation with hash generation.

Efficiency

You can find articles with compression efficiency of 30%, 50% and even 70%. In reality, everything is very individual. For example, if you had a class:

.postsListItemTitle {
	font-size: 24px;
}

From it you get:

.j1 {
	font-size: 24px;
}

21 characters (.j1{font-size: 24px;}) instead of 44 (.postsListItemTitle__h53_j{font-size: 24px;}) - savings of 52%. This class is used in 20 cards on the page, which reduces the weight of html as well.

On average, however, one can speak of a reduction in the weight of css by 10–20%.

next-classnames-minifier — let’s make the web Even faster.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreicb6fqpmgbufwmk33goe2ovkdht5mmbfzk5iw57nfc7omxk5aoeaq@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Next.js v13: What's New and What's Coming]]></title><link>https://alexdln.com/blog/nextjs-v13</link><guid isPermaLink="true">https://alexdln.com/blog/nextjs-v13</guid><pubDate>Mon, 24 Oct 2022 18:50:00 GMT</pubDate><description><![CDATA[Next.js is the largest framework for web application development. It was created six years ago, on October 25, 2016. Since then, 12 major releases have been issued, making the web faster and faster. Despite the framework’s complexity, the size of each subsequent release did not decrease, though the pace of updates did slow down.]]></description><content:encoded><![CDATA[
On October 25, the Next team will hold a presentation dedicated to the new, 13th version. As is tradition, this update is being called the biggest yet. It could affect literally everything—from further build speed improvements to changes in the application structure and new abstractions.

Next.js is the largest framework for web application development. It was created six years ago, on October 25, 2016. Since then, 12 major releases have been issued, making the web faster and faster. Despite the framework’s complexity, the size of each subsequent release did not decrease, though the pace of updates did slow down.

Three years ago, the Next.js team held its first release-focused conference and made it an annual event. The exception was the presentation of version 11, which took place in June 2021. This article will discuss what features were completed in the latest updates and what we can expect at tomorrow’s conference.


About the Conference and Release

This is the first conference that will also be held in person, in San Francisco. At the same time, over 90,000 users have registered for the online conference, eager to find out live what exactly the Vercel team has prepared.

Since the last major release, three minor versions have been released; the last time this happened was with version 9, for which a total of five minor versions were released. This is largely due to the fact that the previous version was particularly notable for the amount of unfinished features released in alpha and beta versions. In the minor updates, all features were gradually finalized and moved to the stable API section.

The main non-technical change is the new Next.js logo, which has become simpler and clearer. It is now shorter, making it more convenient to use on websites and in images. While the width remains the same, the height has been reduced by a factor of three.

Layout RFC

In May of this year, Next.js unexpectedly released an RFC (request for comments). It primarily discusses a new abstraction—layouts—as well as a host of related changes aimed at speeding up development, improving the developer experience (DX), and standardizing practices through new conventions.

This working proposal literally describes the future look of the framework and turned out to be so comprehensive and ambitious that I decided to write a separate article about it - Layout RFC.


Middleware

One of the biggest updates in the previous version was middleware. It was introduced as a beta feature. It was added to handle user requests in the edge runtime, including performing rewrites and redirects. Initially, the files were named with an underscore at the beginning (_middleware) and stored in the root directory of the pages.

Starting with version 12.2, the file is simply named middleware and is stored in a folder one level higher, with only one file per application (previously, there could be multiple such files). The file itself specifies the exact paths for which it should operate. This change is explained by performance improvements and support for the upcoming Layout RFC.


On-Demand Incremental Static Regeneration (Stable)

Incremental Static Regeneration is an approach to page generation that allows pages to be updated incrementally (i.e., after the application has been built and is already running).

Version 9.5 introduced ISR, which worked according to the following principle: a time interval was specified for a page, and the page was updated no more frequently than this interval after a user request.

That is, the page was generated → users see the generated page → after the time interval elapsed, the first request triggers a page rebuild, but that user receives the old version → after the rebuild is complete, users will receive the new version, and the interval starts over.

In version 12.1, a new approach to incremental regeneration was introduced as a beta feature. Now, in any API, you can trigger a page rebuild by calling a method on the response object (res.revalidate("/path/to/page/")). In version 12.2, this feature has been marked as stable.
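For illustration, such an API route might look like this (the path, secret name, and error handling are illustrative):

// pages/api/revalidate.js - a sketch of on-demand ISR via res.revalidate
export default async function handler(req, res) {
  // a simple token check so that only trusted callers can trigger a rebuild
  if (req.query.secret !== process.env.REVALIDATION_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  try {
    await res.revalidate('/blog/my-post'); // rebuild the page at this path
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).json({ message: 'Error revalidating' });
  }
}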


Improved SWC Support

Build acceleration is an integral part of every Next.js release. This time, the build process has been accelerated by another 40%. Minification, meanwhile, is now 7 times faster.

Despite this significant speedup, minification with SWC is disabled by default, and a special flag - swcMinify - must be added to the config to enable it. Starting with the next release, this feature will be enabled by default.
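Until then, the flag can be set like this:

// next.config.js - enabling SWC minification while it is still behind a flag
module.exports = {
  swcMinify: true,
};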


Separate Folder

To run the application, a number of necessary files must be transferred to the server—packages, public files, and the built application. The latter, in turn, contains many files unnecessary for running the application, which take up a lot of valuable space.

The goal of the new feature is to create a standalone folder that contains all the files necessary for the application to run and nothing extra. It is enabled by adding an output key with the value standalone to the config.
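Based on that description, the config looks roughly like this (in earlier 12.x releases the same feature lived under an experimental flag):

// next.config.js - a sketch of the standalone output described above
module.exports = {
  output: 'standalone',
};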

However, this folder is not entirely standalone, as you will subsequently need to move the modules and the public folder into it, as shown in the example from Next.js in the Dockerfile.


Images

Next.js uses its own component for images, which can dynamically compress images and apply the necessary styles to them. It has been constantly updated, sometimes improving workflows and developer experience, and other times speeding up the compression process.

A new component for working with images will be added in the next release. Currently, it is located at next/future/image. In the next release, it will be moved to next/image, while the old component will be available at next/legacy/image.

The component works for all images, including those hosted outside the site, so it became possible to specify from which domains images may be loaded and, consequently, compressed. To do this, the images.domains key was added to the config. In recent updates, the images.remotePatterns key was added, which, unlike the previous option, checks for a pattern match rather than an exact domain match.

It is now also possible to disable image optimization entirely by adding the images.unoptimized flag to the config.
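A sketch of these config keys (the hostname is illustrative):

// next.config.js - remote patterns and the unoptimized flag mentioned above
module.exports = {
  images: {
    remotePatterns: [
      // unlike images.domains, this matches a pattern rather than an exact host
      { protocol: 'https', hostname: '**.example.com' },
    ],
    // unoptimized: true, // disables the built-in image optimization entirely
  },
};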


React Support Improvements

The Next.js team always prepares for future React.js releases in advance. For example, support for React v18 was available even before the official release, and server components began testing before their final design was finalized. In addition to Next.js functionality, the new release also describes server components, which will now be fully integrated into the process.

In the latest release, Next.js introduced a number of React-related updates available in the alpha version. These include Server-Side Suspense, Streaming SSR, and server components. In version 12.1, this functionality was moved to the beta version. In the next version, these features will likely be moved to the stable release alongside the release of the Layout RFC.


Other changes

Middleware running in the edge runtime has performed well, and the Next team has continued its development by porting its functionality to it. Thus, version 12.2 added support for API routes and page generation in the edge runtime, and edge SSR was also configured.

Another welcome change is that the <a> tag is no longer required as a child element of the Link component. For now, this behavior is enabled via the config, and starting with the next release it will be enabled by default.


Additional Utilities

A new utility has been created to generate OG images used for previewing links on social media and messengers when they are shared. Images are generated 5 times faster than alternatives and perform 40% better.

To enable this feature, a special API route is created that returns JSX with the markup for the future image.
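A minimal sketch of such a route using the @vercel/og package (the markup is illustrative; at the time of writing, the edge runtime flag was still marked experimental):

// pages/api/og.jsx - a sketch of an OG image generator
import { ImageResponse } from '@vercel/og';

export const config = { runtime: 'experimental-edge' };

export default function handler() {
  return new ImageResponse(
    <div style={{ display: 'flex', fontSize: 64 }}>Hello, OG!</div>,
    { width: 1200, height: 630 },
  );
}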

This feature is already being used for all preview images on Vercel pages and in conference tickets.

You can read more in this article from Vercel.


Vercel Platform Updates

Changes to the Vercel platform are also noteworthy. One of the most interesting updates to the platform in recent years is Vercel Preview, which allows you to view sites in collaboration mode, edit their code, and write comments.

Since the latest release, the platform has improved the commenting process, making it more user-friendly, and has added screenshots, notifications, and full synchronization with Slack.


Community

Next always pays attention to its community, thanks to which it is growing and developing so actively. Every update includes words of gratitude to the community, and with each release, they like to share the community’s growth - and I think this release will be no exception, since the number of members has grown from 1,800 to 2,300 (a 30% increase).

Since the last release, the first Developer Survey has been conducted. The Discord Community has also been improved.


Conclusion

The latest version was released just 4.5 months after the previous one. It contained many changes, though most of them were in the alpha and beta testing stages. As a result, most of the subsequent updates focused specifically on refining existing features. Despite this, the Next.js team has prepared and released - or is preparing - many interesting new features.

The most interesting and promising innovation is the Layout RFC, which has the potential to significantly improve workflows and solve a number of development challenges.

On October 25 at 7:30 PM CEST (8:30 PM Moscow Time), the Vercel team will host a conference to present the new Next.js release. At the event, we’ll learn exactly what features will be included in the new release as ready-to-use functionality, hear about the company’s plans, and watch presentations from developers at Vercel and other major companies.
]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreihgzyxj3bcykhc2mcvzg5trvcer7ijg6w64yz3tbztorhloxndbl4@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Next.js Layout RFC: Changing Everything to Make the Web Faster]]></title><link>https://alexdln.com/blog/nextjs-layout-rfc</link><guid isPermaLink="true">https://alexdln.com/blog/nextjs-layout-rfc</guid><pubDate>Mon, 24 Oct 2022 18:23:00 GMT</pubDate><description><![CDATA[In May of this year, Next.js unexpectedly published an RFC (request for comments) on its blog. It primarily discusses a new abstraction - layouts - as well as a host of related changes aimed at speeding up development, improving DX, and standardizing through the creation of new conventions. This working proposal definitely deserves attention, both because of its complexity for the framework and because it literally describes what it will look like in the future.]]></description><content:encoded><![CDATA[Next.js is the fastest-growing framework. Since its creation in 2016, 12 updates have already been released, each of which the company has called “the biggest.” On October 25, Vercel (the company that owns Next.js) will unveil a new, 13th release, which, of course, will once again be “the biggest.” However, this article isn’t specifically about that release, but rather about a truly new process for the company.

In May of this year, Next.js unexpectedly published an RFC (request for comments) on its blog. It primarily discusses a new abstraction - layouts - as well as a host of related changes aimed at speeding up development, improving DX, and standardizing through the creation of new conventions. This working proposal definitely deserves attention, both because of its complexity for the framework and because it literally describes what it will look like in the future.


Background

A Request for Comments (RFC) is an official document developed by the Internet Engineering Task Force (IETF) that describes specifications for a specific technology. When an RFC is ratified, it becomes an official standards document [source].

Many libraries have similar RFCs, including React (https://github.com/reactjs/rfcs/), though full-fledged press releases are rarely written for them. Next.js outlined its vision by publishing a blog post five months ago and updating it in September. A discussion was also created on GitHub, where developers were invited to share their wishes and feedback.

This request is being called the biggest update to Next.js. And that’s probably true. The proposal is to change the application structure and introduce new abstractions, but let’s take it one step at a time.


Structure

It is proposed that all pages be stored in the “app” folder. Previously, they were stored in the “pages” directory; this folder will continue to function, but with some limitations. Initially, Next.js will process pages from both directories to maintain backward compatibility and allow for a gradual migration of pages. There is some debate regarding the folder name, as this name may already be in use within the application. It is quite likely that it will be renamed by the time of release.

Another change in naming concerns page names and their placement rules. Previously, there were two options: inside a folder with an index file (/about/index.js) or as a standalone file (/about.js). However, regardless of the option chosen, the page was always rendered as a file (/about.html). The new standard proposes creating a new folder for each page containing a page.js file (/about/page.js).
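Under the proposal, a minimal structure looks roughly like this (paths are illustrative):

app/
  page.js        // renders /
  about/
    page.js      // renders /about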


Layout

Next.js uses a number of abstractions that are automatically bound to the application - these are _app, _document, and _middleware. Middleware was added in version 12 as a test API, and in recent minor releases it was modified and renamed to middleware (without the underscore at the beginning). It is proposed to remove _app and _document, which are used for document and template rendering for all pages. Their role will be taken over by a new abstraction, which is the primary focus of this proposal: layout.

The difference between layout and _document is that styles do not currently work in layout. This issue will be partially resolved by another RFC dedicated to adding support for global styles.

Layouts are divided into root layouts, stored at the top level of the app directory, and additional layouts, stored in subdirectories. The root layout will describe the main markup - html, head, and body - effectively replacing _document. Additional layouts will describe templates within the body for nested pages.

A nice feature of layouts is that you can call getStaticProps and getServerSideProps (to retrieve data during the build phase or when rendering the page on the server) and pass the necessary data to each nested page. Previously, you could only call getInitialProps in _app, though this was not recommended.
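Based on the RFC’s description, a nested layout might look roughly like this (the API was still subject to change; the data source and markup are illustrative):

// app/blog/layout.js - a layout with its own data fetching, as proposed in the RFC
export async function getStaticProps() {
  const categories = await fetch('https://example.com/api/categories').then((res) => res.json());
  return { props: { categories } };
}

export default function BlogLayout({ children, categories }) {
  return (
    <section>
      <nav>
        {categories.map((category) => (
          <a key={category.slug} href={`/blog/${category.slug}`}>{category.title}</a>
        ))}
      </nav>
      {children}
    </section>
  );
}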


Loaders

Rendering some pages, especially in server-side rendering mode and when fetching data, can take a long time. By default, the user sees a blank white screen during this time. This ruins the user experience, and if it takes too long (more than 3 seconds), it often leads to user churn. To make the user experience more pleasant in such cases, it’s worth using loaders and skeletons, which will create the appearance of activity and site availability.

In React, all lazy-loaded elements can be wrapped in Suspense. In Next.js, it’s recommended to create a loading.js file containing the loading component. Ultimately, this will be rendered using Suspense as well, but the page code won’t be cluttered with multiple wrappers.
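A sketch of the convention (the markup is illustrative):

// app/blog/loading.js - rendered while the page segment is loading
export default function Loading() {
  return <p>Loading posts…</p>;
}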

It will be interesting to see how this functionality works for search engine crawlers (after all, without this, Next.js would immediately serve the static content, and the crawler could scan the site without any issues).


Error Pages

Currently, in Next.js, you can create 404.js, 500.js, and _error.js files for errors in the root directory of the pages. That is, in the static version, there is only one error page for the entire application. This was a problem, among other things, for multilingual sites, where this page needed to be localized in some way. There were also limitations on creating different error pages for sections (for example, for a blog with recent posts and a button to return to the blog’s homepage).

Going forward, it is proposed to create an error.js file for any directory, which will apply to all nested pages.
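A sketch per the RFC (the exact props passed to the component were not yet finalized):

// app/blog/error.js - shown when any nested page in /blog throws
export default function Error({ error }) {
  return <p>Something went wrong: {error?.message}</p>;
}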


Grouping Pages

It is not uncommon for pages at the same level to use two different templates or error pages, for example, one set for the application’s functionality and another for product pages. This issue will be resolved in the future through page groups. Pages are grouped into a new subdirectory, and the group name is enclosed in parentheses “(product)”.

If you create groups at the top level of the application, you can use different root layouts within them.
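A sketch of such a structure (group and page names are illustrative; the group name does not become part of the URL):

app/
  (marketing)/
    layout.js      // root layout for marketing pages
    about/
      page.js      // renders /about
  (shop)/
    layout.js      // a different root layout
    cart/
      page.js      // renders /cart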


Intercepting Routes

This functionality is based on how social networks work, where clicking on a news item opens a modal window, and when you share a link, the post opens in full-screen mode. This functionality works by intercepting transitions within the directory.

The names of such routes begin with “(..)”; the number of such parentheses at the beginning of the file indicates how far up the directory tree you need to go to reach the desired route. For example, to intercept photos (/photo/) when opening them from an article page (/blog/post-name/), you need to create a directory named (..)(..)photo within the article’s directory. You can define a modal window there, and when navigating from the article page, that modal will open. If you share a link or reload the page, the photo will open as a regular page.
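For the example above, the structure would look roughly like this:

app/
  photo/
    [id]/
      page.js          // /photo/123 opened directly, as a regular page
  blog/
    [slug]/
      page.js          // the article page
      (..)(..)photo/
        [id]/
          page.js      // renders the photo as a modal when opened from an article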


Parallel Routes

Such routes can be used for pages consisting of two complex and unrelated segments. For example, if a page consists of a blog in the top half and an FAQ in the bottom half.

For such a page, you’ll need to create a directory containing layout.js and the segments it will consist of. To create segments, create a subdirectory whose name starts with “@” (e.g., @posts and @faq). Then, these segments will be available as props in the layout and can be embedded into the appropriate parts of the markup.

There can be subpages within segments, and when navigating to them, the segment to which they belong will be replaced in the layout. That is, for example, you can create an article page (/blog/@posts/some-post), and when navigating to it, the @posts segment will be replaced by this page, while the @faq segment will remain unchanged.
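A sketch of this example (directory and prop names follow the @posts/@faq naming above):

app/
  blog/
    layout.js
    @posts/
      page.js
    @faq/
      page.js

// app/blog/layout.js - segments arrive as props named after their directories
export default function BlogLayout({ posts, faq }) {
  return (
    <>
      <section>{posts}</section>
      <section>{faq}</section>
    </>
  );
}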

However, upon reloading, only the current page is displayed (the @faq segment disappears). This is likely a bug, but we won’t know for sure until the full release of the product.


React.js Features

The Next.js team always prepares for future React.js releases in advance. For example, support for React v18 was added even before the official release of the version, and server components began testing even before their final design was finalized. In the new proposal, in addition to Next.js functionality, server components are also described, which will now be fully integrated into the process.

By default, all pages will be rendered as server components. It is also proposed to name client-side and server-side files with the extensions .client.js and .server.js. That said, React has not yet finalized exactly how server-side and client-side components will be distinguished, but this approach was the most popular, and Next decided to standardize on it with the caveat that, in the event of changes, it will be adjusted according to accepted conventions.


Conclusions

The Next.js team decided to completely overhaul the application architecture by introducing a range of new abstractions and phasing out the old ones. These bold changes have the potential to solve many of the challenges developers face. If all these changes are adopted and work as described, this will mark another major update to the framework and open up significant potential for further development.

A basic example of the functionality described in this article can be found on the article’s website; the code is available on GitHub. A much more illustrative (and high-quality) example is shown in a tweet from Vercel.

On October 25 at 7:30 PM CEST (8:30 PM Moscow Time), the Vercel team will host a conference to present the new Next.js release. This RFC will certainly be discussed there, but whether it will be included in the upcoming 13th version as ready-to-use functionality remains to be seen.
]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreidsewjnrtk6zk6ztb7y5gewus4srauozb5gpgwavuhur5fsprane4@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Theming, Part 3: Themeizer - A Young Companion to Styles]]></title><link>https://alexdln.com/blog/themeizer-part-3</link><guid isPermaLink="true">https://alexdln.com/blog/themeizer-part-3</guid><pubDate>Tue, 15 Feb 2022 07:19:00 GMT</pubDate><description><![CDATA[It’s time to acknowledge that theming isn’t about imposing a soulless black-and-white world or catering to personal whims; it’s a crucial step in ensuring service accessibility and maximizing conversion rates]]></description><content:encoded><![CDATA[This is already the third article on a topic that doesn’t exist. The first article was written to describe a useful and interesting feature that also produces beautiful results. Now, however, it’s time to acknowledge that theming isn’t about imposing a soulless black-and-white world or catering to personal whims; it’s a crucial step in ensuring service accessibility and maximizing conversion rates.

If the technical part of the first article focused on the client-side, and the second on the server-side, in this third article I’d like to talk about the difficult journey styles take before reaching the site, and about the companion I created to help them - a friendly guide assisting them at every step, from design to layout. I named it Themeizer, and in this final article of the trilogy, I’d like to introduce you to it, its capabilities, and tell you about how it came to be.

Before we get to that, it’s worth refreshing your memory and reviewing the key points from the previous articles.

Key Points

Part 1. Thematization. History, Reasons, Implementation [article]

Part 2. New browser APIs. Theming with SSR. Choosing between SPA, SSR, and SSG [article]

Additionally, it’s worth paying attention to two more CSS properties:

color-scheme

This property tells the browser which color schemes can be used to render an element. It affects user interface elements (colors of fields, buttons, scrollbars, text, background, etc.). Currently, the documentation describes the following possible values:

normal

The default value, meaning that the site does not support color schemes. Browser interface elements will be styled using a light color scheme.

light

Means that the site supports a light (daytime) theme. Browser interface elements will be styled with a light color scheme.

dark

Means that the site supports a dark (night) theme. Browser interface elements will be styled with a dark color scheme.

only

The user agent may forcefully override an element’s color scheme with the user’s preferred color scheme. This value is used to prevent such overrides (support for it is currently implemented only in Safari).

<custom-ident>

Any other value can also be specified as a property. All values not described above will have no effect (this logic has been added for future backward compatibility).

In the case of theming, the property must be formatted as follows:

:root {
  color-scheme: light dark;
}

.light-theme-example {
  color-scheme: light;
}

.dark-theme-example {
  color-scheme: dark;
}


Where color-scheme: light dark; means that the page supports light and dark schemes, with a preference for the light theme.

Since the browser needs time to load and apply styles, style flickering may occur when overriding the color scheme. To prevent this situation, you can use the <meta name="color-scheme"> meta tag, which immediately informs the user agent of the supported schemes.

For example, <meta name="color-scheme" content="dark light"> indicates that the page supports dark and light themes, with a preference for the dark theme.

accent-color

This property is used to change the accent color (the primary color) for interactive elements.

Possible values include:

auto – sets the platform’s accent color (if available);

<color> – a color in any format.

The specified color automatically adjusts the contrast of other component parts of the element (text color, background, sliders, etc.). Color variations may also be generated for gradients, etc., to bring the control into compliance with the platform’s conventions regarding the use of accent colors.

The following HTML elements are currently supported: <input type="checkbox">, <input type="radio">, <input type="range">, and <progress>.

You can see how this property works on a page created by Joey Arhar – accent-color.glitch.me.

Once you’ve covered the basics, you can move on to the next section—processes and prerequisites.

Steps for implementing theming

To fully customize a theme, you need to take three steps: design, development, and implementation in the existing styles.

Strange as it may seem, implementing theming starts with design. This is a very important and quite extensive step, and the later new color schemes are added, the longer this step becomes. Overall, this process can be divided into two main tasks: creating a color palette for each theme, and creating mockups of all pages in each color scheme.

The second point may seem unnecessary, but after creating color schemes for each theme, they need to be tested. This can only be done after creating layouts for all pages of each theme. It’s a fairly labor-intensive process, but it’s the only way to ensure that the current color palette looks high-quality everywhere and at all times.

All further design changes must also be implemented in the mockups for each theme. High-quality design systems and components solve this problem perfectly, but building the entire design solely on components is difficult and, perhaps, even harmful.

It’s clear that developing and maintaining variations for each theme is a monotonous process that takes up too much precious time. As much as I love the dark theme, it’s unfair to complicate the designer’s (and subsequently the developers’) work to such a significant degree, so it would be nice to expand the core functionality and simplify the execution of such tasks.

Design and Figma

When reading about design systems, you often come across the term “design tokens.” These typically refer to colors, typography, dimensions, effects, and other values. The standard structure of design tokens is a key-value pair. In the context of Figma, these can be described as objects that define various styles.

Figma Tokens

Despite the general concept (all styles are described as objects), Figma tokens do not have a uniform structure. For example, a color object looks like this:

{
	description: "",
	id: "S:8c92367364cb87031fe4e21199c200a3f8c79dd9,",
	key: "8c92367364cb87031fe4e21199c200a3f8c79dd9",
	name: "dark/primary",
	paints: [
		{
			blendMode: "NORMAL",
			color: {r: 0.3999999761581421, g: 0.7120000123977661, b: 1},
			opacity: 1,
			type: "SOLID",
			visible: true
		}
	],
	remote: false,
	type: "PAINT”
}


Most other objects (typography, effects, etc.) will have nothing in common with this object’s structure, except perhaps the name and identifier. To be more precise, these are not objects themselves, but references to them. Thanks to this (or because of this), changing a style in one place will update it everywhere it’s used.

Since the main goal is to make life easier for designers when implementing theming, colors are the most important tokens to focus on. Now that we understand what exactly needs to be modified, we need to figure out “How?”

Figma for Developers

To do this, let’s turn to the Figma documentation in the “Figma for developers” section. According to it, developers have the following options available: REST APIs, plugins, widgets, and integrations.

Widgets and integrations are of little interest in the context of this task, but the other two options are worth considering.

REST APIs

www.figma.com/developers/api

This interface provides read access and interaction with designs in Figma. Thanks to this API, you can retrieve all design nodes and extract styles and their properties from them. Currently, the Figma APIs allow you to retrieve “files” (objects describing the entire layout’s content) and their versions, images, users, comments, and styles, as well as submit comments on files.

Despite this impressive list of capabilities, this interface does not provide complete freedom of action, and along with important information, it returns a lot of useless information (in the context of theming).

Things are much better with plugins.

Plugins

www.figma.com/plugin-docs/intro

Plugins are web applications that extend Figma’s functionality. They can read and modify node styles, such as color, position, effects, typography, etc.

Plugins are limited to the scope of the current design; that is, they can track the selection of elements, but they cannot track the removal of styles, nor can they access styles from other projects within the current organization.

After diving into the topic of extending Figma’s functionality, we can return to our original goal: to make life easier for designers and developers.

Optimizing processes during the design phase

Creating a copy of each page for every color scheme, as mentioned at the beginning of the article, is too labor-intensive a process. Therefore, it’s worth starting the optimization right here.

Several key requirements can be identified that the final solution must meet: it should be easy to use, it should not impose new workflows on designers, and the styles should remain a single source of truth.

The Themeizer Plugin

To ensure ease of use, the solution should not create its own API and subsequently impose it; it should merely complement existing functionality. In Figma, style folders are typically used to create color schemes.

Therefore, folder names can be considered theme names, and all styles within them - the colors of the current scheme. Plugins provide access to the values of these styles and allow them to be modified. This means that from the plugin, you can retrieve any theme and change all styles in the mockups to a different theme. Additionally, there are situations where a page needs to display designs in different color schemes (for example, to present a new concept); in such cases, you need to change styles not across the entire design, but in individual frames (such as selected ones).

This is the first task the plugin solves.

However, this comes with some additional requirements. Not every folder in a design is a theme; often there are separate folders for general styles, button elements, or branded elements. Therefore, the plugin needs additional settings where you can select themes.

All themes that are not selected as light or dark are considered shared, meaning they are available to any theme.

The plugin knows all color schemes. This means it has access to the primary source of truth, and if so, this is an opportunity to make this source not just the primary one, but the only one.

Exchange with the client side

To make Figma styles the sole source of truth, you need to create a shared repository for both the design team and the client. To do this, all styles must be uploaded to the server.

As mentioned earlier, the plugin shouldn’t impose anything, so anything can serve as the server. The only rule is that the address must accept a POST request. This address must be specified in the plugin settings, and later, upon publishing, a request containing the themes in the following format will be sent to it:

{
  [theme: string]: {
    list: [
      {
        name: string,
        value: string,
        type: "solid" | "linear" | "radial"
      }
    ],
    type: "light" | "dark" | "shared"
  }
}


Still, before uploading to the server, it’s worth reviewing all changes and making sure everything is correct. For such tasks, the plugin APIs provide access to client storage (an alternative to local storage) and to storage within the current file. Client storage is definitely not suitable, since several people work on the project and each of them must have access to the saved data. Storage within the current file also limits capabilities (for example, the data will be lost when the project is copied or when the plugin is reinstalled).

There is another location that knows the most recently saved styles - the server to which the plugin is already capable of writing color schemes. Accordingly, changes can also be read from it. To do this, you need to specify an address in the settings where data (using the same scheme) can be retrieved via a GET request.

Additionally, special headers may be required to execute requests. A separate field - headers - has been created for them.

After that, when publishing the next version, you will be able to see all changes.

Now all theme data is stored on the server and updated as needed. The next step is to configure retrieval on the client.

Optimizing processes during the development phase

What should happen on the client: retrieve the styles, convert them, and integrate them into the application.

Retrieving styles

To do this, you can use the same URL as for checking changes within the plugin. Accordingly, the client needs to know this URL and the headers required for the request.

As I wrote in the previous article, theming can be configured on the server side or on the client side. With server-side rendering, styles are expected to always be up to date, but sending a request to the style server on every page request becomes a resource-intensive operation. If theming is applied on the client, then all styles must be fetched during the build phase.

Conversion

The plugin saves all names in a convenient format for creating CSS variables - kebab-case. Thanks to this, all that remains is to retrieve all the names, turn them into variables, and create a class for each theme. Additionally, you need to add color-scheme to the classes so that for dark themes, the browser displays UI elements in a dark interface.
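Roughly, the conversion can be sketched like this, assuming the payload format shown earlier (the function and class naming are illustrative):

// a sketch: one theme object from the server becomes a CSS class with variables
const themeToCss = (themeName, theme) => {
  const variables = theme.list
    .map(({ name, value }) => `  --${name}: ${value};`)
    .join('\n');
  // simplification: shared themes are treated as light here
  const scheme = theme.type === 'dark' ? 'dark' : 'light';
  return `.${themeName} {\n  color-scheme: ${scheme};\n${variables}\n}`;
};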

Integration

For client-side theming, the best approach is to embed the themes during the build process. For server-side theming, embedding should occur during page rendering. In both cases, it’s important to optimize the number of requests sent and cache their results.

Themeizer was created to address these challenges.

The “themeizer” package

The package retrieves the necessary settings from environment variables, namely the server URL, headers, and revalidate. Since there can be a large number of requests, the package supports the revalidate option out of the box, which determines the frequency of request sending.

The package supports two options for embedding styles: during compilation, by replacing the meta tag with a style tag containing classes and styles; and a manual mode, used primarily for SSR.

You can read more about the package on its page - https://www.npmjs.com/package/themeizer

The “next-themeizer” package

Additionally, a package was created for Next.js, since one of Next.js’s main goals is to lower the barrier to entry: minimal hassle, no unnecessary logic, and no extra application configuration.

You can add the package to your Next.js configuration as follows:

// next.config.js
const withThemeizer = require('next-themeizer').default;
module.exports = withThemeizer();


https://www.npmjs.com/package/next-themeizer

Process Optimization During Implementation

The packages listed above will work perfectly if the site is already built on CSS variables and uses them throughout. However, if the decision to add theming is made after the product has already been created, the process of transitioning to variables can take a significant amount of time.

The “themeizer-cli” package

Another feature of the ecosystem is simplified implementation. For this, you can use the themeizer-cli package. This is a command-line utility that automatically replaces colors in style sheets with variable names from the desired theme.

Example of use:

npm install themeizer-cli -g
themeizer-cli -c ./themeizer.config.json
// or
themeizer-cli -u https://server-url.com/themes -t light


https://www.npmjs.com/package/themeizer-cli

The utility is just getting started and has many improvements ahead of it. One of its main drawbacks at the moment is the inability to control changes. It simply goes through all styles and changes colors wherever a variable can be used.

Change control and color support are also important components of theming that, of course, could not be overlooked.

The “stylelint-themeizer” package

When working with styles and validating them, the main tool is Stylelint (ESLint for styles), which can detect errors and automatically fix them.

Stylelint-themeizer is a plugin that adds a new rule: if this rule is enabled, styles will be automatically checked, and if a color appears in published styles but is not defined as a CSS variable, it will display an error and suggest the variable name. When used with the --fix argument, Stylelint will automatically fix all styles.

www.npmjs.com/package/stylelint-themeizer

Note: Unfortunately, Stylelint still does not have full Sass support (e.g., the existing postcss-sass parser cannot correctly process styles without the “#” prefix).

Conclusion

Themeizer has taken its first steps toward maturity, but there is still a long way to go. Perhaps bringing it into the spotlight at such an early stage is a hasty decision. But without a doubt, it is already capable of much, ready to grow and evolve, and waiting for the opportunity to help all developers make their web worlds more convenient and accessible.

]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreianl5fuuwpkyncutbknwhimxlioexapxpznznudf4ci4c46k3uplq@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Theming. Part 2. New Browser APIs. Theming with SSR. Choosing Between SPA, SSR, and SSG.]]></title><link>https://alexdln.com/blog/themeizer-part-2</link><guid isPermaLink="true">https://alexdln.com/blog/themeizer-part-2</guid><pubDate>Wed, 12 Jan 2022 08:01:00 GMT</pubDate><description><![CDATA[According to data collected by Android Authority (2,514 respondents) and an analysis by Thomas Steiner, over 80% of users use a dark theme. Of course, it’s hard to call this sample entirely representative, since the surveys were conducted on technical forums, but overall, we can say that a good half of the internet uses a dark theme.]]></description><content:encoded><![CDATA[According to data collected by Android Authority (2,514 respondents) and an analysis by Thomas Steiner (243 respondents), over 80% of users use a dark theme. Of course, it’s hard to call this sample entirely representative, since the surveys were conducted on technical forums, but overall, we can say that a good half of the internet uses a dark theme.

Every year, the web takes giant strides toward the bright future (or the dark one, depending on which you prefer). One by one, tools are adding dark themes, and major tech giants are updating and improving their design systems to stay relevant in this expanding dark world. Implementing a dark theme significantly improves the user experience and, as a result, business metrics. For example, Terra, one of Brazil’s largest news companies, recently increased the number of pages viewed per session by 170% and reduced its bounce rate by 60% after adding a dark theme [read the article].

The first part of the series was largely devoted to the history of CSS variables—their creation, development, and evolution - and also included examples of theming at both the planning and design stages as well as during front-end development, covering various methods of theming and theme switching [Theming: History, Reasons, Implementation]. In this article, taking it a step further, we will discuss client-server interaction and the capabilities of modern browsers in the context of theming.

Server-Side Rendering. SSR.

Before diving deep into server-side theming, it’s worth briefly touching on the topic of server-side rendering—what it is and how it works.

The history of server-side rendering began with the creation of PHP by Rasmus Lerdorf in 1994, just one year after the creation of HTML by Sir Timothy John Berners-Lee (also the creator of URI, URL, HTTP, HTML, and the World Wide Web [together with Robert Cailliau]). Despite the fact that PHP developed without a full-fledged specification until 2014, its popularity during those years was extremely high. In 2003, this wave of popularity was greatly boosted by WordPress, which to this day powers 40% of the internet.

In addition to PHP, other languages also attempted to fill the niche of a language for server-side rendering. For example, Java using servlets, or Ruby and its web application development framework Ruby on Rails. But they never managed to achieve any significant market share.

The turbulent era of PHP’s dominance began to wane in 2009 with the emergence of Node.js, and more precisely in 2010, when TJ Holowaychuk wrote Express.js.

The next milestone began to take shape from 2010 to 2014, during the emergence of the “big three” - Angular (2010), React (2013), Vue (2014), which laid a solid foundation for a new type of web application - SPAs (single-page applications).

They all shared a common problem - the lack of any SEO optimization. Consequently, an equally significant trio of frameworks was subsequently created for them: Next.js (2014), Nuxt.js (2016), and NestJS (2017). These frameworks allowed applications to be generated on the server, thereby providing search engine crawlers with ready-to-index content.

SSR also has advantages over SPAs: the server responds with ready-made content, so the user does not have to wait for the JavaScript bundle to load and execute before seeing the page.

Server-side rendering has come a long way, just like other website rendering options.

Alternatives to SSR

Single-Page Application (SPA) - the very “big three.” The site is generated into JavaScript files, and all content is rendered on the client.

Static website. The website is generated into static pages, and these files are subsequently served from the server. The client immediately receives the finished page.

A hybrid of these two approaches is also popular, which is present in the previously mentioned Next.js and Nuxt.js out of the box - static site generation (SSG). With this approach, static HTML is served from the server, and a virtual DOM is reconstructed on the client so that the application subsequently runs in SPA mode.

The difference in the context of theming

Since rendering in an SPA occurs on the client, theme detection and the addition of its styles must reside at the top level of the application. Accordingly, all logic related to theme configuration must be moved into a separate bundle, or rendering must occur only after the theme has been determined. Additional complexities may arise in projects with a single source of truth, since all actions must pass through it, or in projects with styles stored in global objects (e.g., CSS-in-JS allows this object to be used within style functions).

In a static site, the theming logic must also be moved to the top level, but unlike an SPA, it does not need to render the entire page afterward. In both cases, the brief interval during which the user’s theme is determined will be accompanied by a color shift (if the user’s theme does not match the application’s default theme).

SSR, on the other hand, allows the server to render the correct theme immediately (based on the theme stored in cookies), and recently this has even been extended to new users.

Google: Theme Detection via Header

Previously, the only way to determine the user’s theme was to add a client-side script. Last August, with the release of version 93, Google added support for headers that allow the user’s device theme to be passed to the server. (The feature actually worked perfectly well in version 92 as well.) The functionality is based on client hints (dry standard documentation – https://datatracker.ietf.org/doc/html/rfc8942).

They allow the server to request the necessary user data. This data will be added to the request headers.

In the case of the device theme, the following headers must be added to the server’s response:

Accept-CH: Sec-CH-Prefers-Color-Scheme, Sec-CH-Prefers-Contrast
Vary: Sec-CH-Prefers-Color-Scheme
Critical-CH: Sec-CH-Prefers-Color-Scheme


The following header will be added to the request:

Sec-CH-Prefers-Color-Scheme: "dark"


Basically, it was only after the full implementation of this API that Google finally added a dark theme to Search.

Unfortunately, the hint functionality is only supported by browsers based on the Chromium engine. All other browsers (including Safari and Firefox) do not support hints.

Server-side theme implementation

The principles of class and style definitions, as well as client-side logic, are described in the previous article.

The following examples will be built using the Next.js framework. The exact same logic can be replicated on any other framework. The getServerSideProps function runs on the server and passes the return value to the page as props [more details].

We store the theme selected on the client in cookies.

const changeTheme = (newTheme: Theme) => {
    document.cookie = `theme=${newTheme};path=/;max-age=31536000`;
    // ...
};


First, we check if the user has a saved theme.

const cookieTheme = ctx.req.cookies.theme;


If the user’s theme is not saved, we determine the user’s theme based on the header.

const userDeviceTheme = ctx.req.headers['sec-ch-prefers-color-scheme'] as string;


If no theme is saved or an invalid theme is saved, we return the default theme.

const userDetectedTheme = cookieTheme || userDeviceTheme;
const defaultTheme = 'light';
const theme = (userDetectedTheme === 'light' || userDetectedTheme === 'dark') ? userDetectedTheme : defaultTheme;


We pass the user's theme to the client-side and use it during rendering.

export const getServerSideProps: GetServerSideProps = async (ctx) => {
  const userDeviceTheme = ctx.req.headers['sec-ch-prefers-color-scheme'] as string;
  const cookieTheme = ctx.req.cookies.theme;
  const userDetectedTheme = cookieTheme || userDeviceTheme;
  const defaultTheme = 'light';
  const theme = (userDetectedTheme === 'light' || userDetectedTheme === 'dark') ? userDetectedTheme : defaultTheme;
  return ({
    props: {
      theme,
    },
  });
};
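
For the browser to send this header at all, the server must first respond with the client-hint headers shown earlier. In Next.js this can be done in the config; a sketch (the source pattern is illustrative):

// next.config.js - sending the client-hint headers for every page
module.exports = {
  async headers() {
    return [
      {
        source: '/:path*',
        headers: [
          { key: 'Accept-CH', value: 'Sec-CH-Prefers-Color-Scheme' },
          { key: 'Vary', value: 'Sec-CH-Prefers-Color-Scheme' },
          { key: 'Critical-CH', value: 'Sec-CH-Prefers-Color-Scheme' },
        ],
      },
    ];
  },
};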


So, what should you choose - SPA, SSR, or SSG?

The choice is simplest with SPA - if you don’t have a server and search engine optimization isn’t a concern, this is exactly what you need. How to set up theming in a standard SPA is described in the previous article.

Between SSG and SSR, the choice depends on the following parameters: how often the page content changes, and whether different users should receive different pages at the same URL.

In the case of theming, the second point applies—different pages are rendered depending on the user’s theme. To be fair, this point alone is not sufficient to justify switching to SSR. It all depends on the server’s capabilities, its stability, and other objectives.

Additionally, SSG and SSR allow browsers to index the page correctly. Although Google can already index SPAs, it is still too early to call a single-page application SEO-friendly.

Other important aspects for evaluating a website are its speed and the user experience it provides.

Web Vitals is the standard for performance testing in today’s world. Therefore, let’s compare the metrics it provides for each mode.

The following examples will demonstrate three page rendering options: SPA (rendering entirely on the client), SSG (static generation with client-side hydration), and SSR (rendering on the server for each request).

All options are identical except for library-specific features (different root element IDs, different approaches to embedding in the head, and various additional capabilities).

The metrics listed below do not accurately reflect the actual performance of specific options. The examples are provided solely to demonstrate relative speed and user experience when interacting with the site in the context of theming.

First, let’s look at the metrics themselves

Now, about the reasons for this difference

In fact, the reports for SSR and SPA should be identical whether the themes match or not.

SPA is fully rendered on the client. Accordingly, it first determines the theme (in a fraction of a second) and only then begins to render the entire client-side portion with the correct theme.

SSG renders the page on the server; then, a virtual tree is built on the client and compared with the actual one. If there are no changes, nothing happens. If there are changes, the client-side is re-rendered.

SSR takes more time to render the page on the server, which increases page load time. On the server, if the theme header is supported, the page is rendered immediately with the appropriate theme.

Unfortunately, the emulator used in PageSpeed Insights does not contain any information about the device’s theme. If it did, the page would load immediately in the light theme. You can check and compare the results yourself; all links will be provided at the end of the article.

This can be seen more clearly in the logs

Conclusion

This section should conclude which option to choose, but it would be presumptuous to provide a definitive answer. Lab tests can show which option is faster or more comfortable for the user (primarily due to the absence of flickering when switching themes), but it is impossible to predict exactly how the application will behave in specific projects. The only thing that can be said with certainty is that if you haven’t yet considered theming and you’re developing a web service, it would be useful to pay attention to this topic.

You can view the full code in the GitHub repository: https://github.com/alexdln/theming
]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreign6i7uq565h7atwdcjwbfpmaeg3ff3gw6nrlmlq6j4k424jsoxsm@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Next.js 12 Release]]></title><link>https://alexdln.com/blog/nextjs-v12</link><guid isPermaLink="true">https://alexdln.com/blog/nextjs-v12</guid><pubDate>Thu, 04 Nov 2021 08:27:00 GMT</pubDate><description><![CDATA[Who would have thought that just four months later, these ideas would resurface and take on even greater significance?]]></description><content:encoded><![CDATA[“The moment we’ve all been waiting for.” But let’s go over all the new features one by one.

Perhaps I’ll start this article by quoting the conclusion of the previous one:

“When version 10 was released, it seemed like we had already implemented everything we could possibly think of. However, Vercel went beyond my expectations and added real-time collaboration.

Hussein concluded his part with the phrase “We love working with great frameworks to help developers make the web faster.” The phrase “make the web faster” became the symbol of this presentation and collaboration with Google. I’m sure Aurora will bring us much more light.”

Who would have thought that just four months later, these ideas would resurface and take on even greater significance?

Now, on to the heart of the conference. Yes, this time we didn’t receive an invitation to a new version release, as was the case with Next 11 (though for some reason they also called it a conference), but specifically to a conference.

“Tomorrow will transform your career” - not a bad headline for grabbing attention. Of course, Vercel hasn’t been the kind of company that needs that for a long time. Individual tickets, chat rooms, and virtual halls - all of this creates, albeit an illusion, the feel of a real conference, and quite successfully at that; few online conferences can provide such an immersive experience.

Introduction

Now let’s return to the conference, specifically to October 26 at 4:00 PM UTC. That’s exactly when the conference began, and we were greeted by Vercel CEO Guillermo Rauch, whom we already knew well from previous releases. The phrase “Let’s make the web faster” became more than just a symbol of the previous release; it became the very essence of Next, and Guillermo reminded us of this once again. It’s nice when such things aren’t just words and ideas, but are backed up by actions and releases - which they remind us of: bundle optimization, inline fonts, critical CSS, and special components with built-in optimizations. Features that not only speed up applications but also require no extra effort. And if you forget to use components where they can be useful - such as Image instead of the classic img, Script, or Link - then Conformance will remind you and provide recommendations for optimizing those elements.

All these innovations are largely focused on improving Web Vitals metrics, which have already become fundamental in the world of fast web and have recently become an important factor in search rankings [Vercel article].

Of course, Guillermo couldn’t help but touch on the importance of developer experience. That’s exactly what the next speaker told us about.

Improvements to the developer experience

Lee Robinson, Head of DevRel at Vercel, began with a feature many have long awaited:

ES modules, which are supported by all modern browsers and reduce bundle sizes. Due to this update, the minimum Node.js version has been raised from 12.0.0 to 12.22.0 (the first version to include native support for ES modules);

URL imports. And this isn’t just about importing packages - images, logic, modules, and even components can be imported via URL. It’s clear that microservices have been gaining significant popularity in recent years, and this is an attempt to create yet another alternative to standard approaches. A very interesting alternative with clear potential. For example: importing components via URL from Storybook (something tells me we’ll see this in the near future).
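As a reference point, here is roughly how this is configured - a minimal sketch based on the Next.js 12 documentation of the time, where the allowed URL prefixes are whitelisted in next.config.js (the Skypack URL is just an illustration):

// next.config.js - whitelist the origins trusted for URL imports
module.exports = {
  experimental: {
    urlImports: ['https://cdn.skypack.dev'],
  },
}

// Anywhere in the app, a package can then be imported straight from that URL:
// import confetti from 'https://cdn.skypack.dev/canvas-confetti'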

But the development experience consists not only of the components we use when building projects, but also of the infrastructure that comes with the tool. Key components of this infrastructure are the compiler and the debugging process. And while the debugging process has been accelerated with every subsequent release (and this one was no exception - rebuild time during development is now under 100 ms), project build times haven’t been accelerated in a long time (not counting the switch to Webpack 5 in version 10.2).

Therefore, as the next step, Lee Robinson introduced us to integration with a new compiler written in Rust - SWC. By using it, the Vercel team was able to speed up the build process by nearly 2x. Still, a caveat is in order here - of course, a Rust compiler speeds up the build, but such a significant acceleration can only be achieved in exceptional cases. (Moreover, it’s quite odd that different versions of Yarn were used when comparing builds across different compilers - it’s possible that different devices were used for this comparison.)

Of course, the presentation wouldn’t be complete without highlighting the special features available on the Vercel platform.

Development of the Vercel platform

Next, Becca Zandstein showed commercial examples of using Next.js Live, which was introduced with the previous release and is currently still in beta testing. One of the companies that has already tried this technology is monogram.io, which develops websites, including those built with Next.js. They use Next.js Live both for their own website and for their clients, including macstadium.com. It’s easy to verify that these services use this technology - just add _live to the end of the URL: https://www.macstadium.com/_live.

Another important feature of the Vercel platform is the analytics it collects on metrics. It’s a very useful feature, but it provides the full picture too late - after the application has been deployed and is in production. Another feature is integration with third-party tools such as Sentry, Slack, logging services, and many others. Now this list has been expanded with another useful tool called Checkly, which was created specifically to solve the problem described above (receiving metrics only after deployment to production). Now, using this tool, you can configure a performance budget and monitor it during the build phase [Vercel documentation].

Middleware and edge functions

And even with that, the release notes weren’t complete, so the next topic covered was middleware and edge functions. Middleware is a kind of “layer” between a request sent from the client and the rendering of the page on the server. In other words, within this layer, you can perform: rewrites, redirects, update cookies, or stream HTML. This functionality can be used for A/B testing, authentication, geolocation-based redirects, and more. Vercel has prepared a GitHub repository with a large number of examples.
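To make this more concrete, here is a minimal sketch of such a layer as it looked in the Next.js 12 beta (a pages/_middleware.ts file; the country check and the /uk path are purely illustrative, and req.geo is populated when running on Vercel):

// pages/_middleware.ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(req: NextRequest) {
  // Illustrative geolocation-based redirect
  if (req.geo?.country === 'GB' && !req.nextUrl.pathname.startsWith('/uk')) {
    return NextResponse.redirect(new URL('/uk', req.url))
  }
  // Otherwise, let the request continue to rendering unchanged
  return NextResponse.next()
}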

The fruits of collaboration with React and Google Aurora

The final topic discussed was the collaboration with the React and Google Aurora teams and the result of this cooperation:

React Server Components are an experimental feature that, like many other React experiments, is being actively tested in Next.js before its general release in React itself. This long-discussed and much-anticipated capability lets components be generated entirely on the server, without any client-side code. [Next.js documentation on React 18]
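Trying the alpha at the time meant opting in explicitly - a sketch based on the announcement (the flags were experimental, and the exact names were subject to change):

// next.config.js - opt in to the React 18 alpha features
module.exports = {
  experimental: {
    concurrentFeatures: true,
    serverComponents: true,
  },
}

// Server components then use the .server.js suffix (e.g. pages/home.server.js),
// while interactive parts live in .client.js files.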

Conclusion

On that wonderful note, we reached the final stage of the presentation, and Guillermo summarized this release. We can confirm the words from the invitation - this is the biggest release yet, and together with the innovations from the previous release, it will win over quite a few people.

Results

The Series C funding ($102 million) secured by the Vercel team to continue building the web of the future, along with support from Google’s “Aurora” team and a large community, made it possible to launch such a major release just four months after the previous, equally exciting release.

To summarize the presentation of the new Next release, the features that have been completed or moved to open beta: ES modules support, URL imports, the Rust-based SWC compiler, middleware, and React Server Components.

Useful links

Previous article about the Next.js 11 release;

Next.js Conf;

Next.js 12 release;

Vercel platform;

Vercel Series C round;

Guillermo Rauch, Forbes magazine;

React 18 and React Server Components;

Upgrade guide from previous versions;

Next tutorial.

Instructions for migrating

From Gatsby;

From an app using React Router;

From create-react-app - automatic experimental utility or manually.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreibi7rydwjgaekbn2hw76cgqonw2576vbszvn7qbgsrr7hcf7yxkri@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Next.js: Where is it going, where did it come from, and what does Google have to do with it?]]></title><link>https://alexdln.com/blog/nextjs-v11-aurora</link><guid isPermaLink="true">https://alexdln.com/blog/nextjs-v11-aurora</guid><pubDate>Mon, 28 Jun 2021 07:18:00 GMT</pubDate><description><![CDATA[It’s been almost two weeks since the Next 11 presentation. Today, I'd like to talk about which technologies soon received comprehensive support, understand who helped implement them, and what goals the company was pursuing.]]></description><content:encoded><![CDATA[It’s been almost two weeks since the Next 11 presentation. First, a little about the presentation. It was a website where, strangely enough, colored cursors were visible, and from time to time, text would appear next to them. Later it became clear that I was among other participants of this event. A decent and promising intro. Today, I'd like to talk about which technologies soon received comprehensive support, understand who helped implement them, and what goals the company was pursuing.

A few words about the technology, in case anyone managed to miss it. Next is a full-stack framework for… incremental builds, server-side rendering, static generation, or hybrid rendering of React-based applications. It was developed by Zeit, a company that by 2016 already had its Now utility. Then, in 2020, after a $21 million investment, the company was renamed Vercel, and the Now utility (also renamed Vercel) evolved into a web service with additional functionality for deploying applications - primarily those built on Next.js, but also Nuxt, Gatsby, Angular, and many other popular tools.

Almost immediately after its release, Next.js became one of the most popular JavaScript backend frameworks, trailing Express.js in only a few areas.

Now, let’s briefly go back to the beginning. A few words about how it all started.

Work on Next, judging by the first commit, began on October 6, 2016, and as early as October 25, the first article describing the new framework for server-side rendering of React-based applications appeared on the zeit.co blog. Since then, active development and improvement of the framework have been underway.

In 2016, it was a framework capable of server-side rendering for pages in the pages directory (without additional libraries for routing), executing logic on the server and passing the result to the page component, adding tags to the site’s <head />, and offering simple installation and configuration. However, it did not allow custom Webpack or Babel configuration, styling was limited to next/css, and much more was missing (most of these issues were known and already planned for).

We could talk at length about all the changes in Next.js over the years - including the introduction of SSR, SSG, and ISR, the switch to TypeScript, AMP support, faster builds, and much, much more - a truly vast amount of useful functionality and improvements have been implemented. But since this article is specifically about the history of these innovations, let’s return to the present day.

Fortunately, the presentation went off without a hitch, so we were able to enjoy the event. It was kicked off, of course, by the aforementioned Guillermo Rauch, who had finished his coffee. First, we heard about the company’s results, its growing popularity in recent months, its large community, and its collaboration with Google and Facebook, as well as the importance of the developer experience and how Next worked to ensure that all changes occurred quickly while preserving all page states. This led us smoothly to the first new feature.

Practically from the moment Vercel launched, the Zeit team has placed significant emphasis on this service, adding support for new technologies and new features - analytics, app optimization, SSL, and more. Now another option has been added to this list - the very one featured in that promising intro at the beginning of the article: real-time collaboration. Share, comment on, highlight, and edit code - all directly from your browser window thanks to real-time collaboration. The Vercel team already had experience adding real-time capabilities to Next.js - specifically, when transitioning the development mode from Fetch requests to WebSockets in Next.js 8. For this new feature, Vercel chose not the classic WebSockets approach but Replicache. The difference is that Replicache doesn’t transmit the actual data over WebSockets; it simply sends a signal to the application indicating that a change has occurred.

It looks amazing, especially how it’s configured for responsive design.

However, it wasn’t without its share of strange glitches [tweet].

Overall, it’s a really cool tool for collaboration. It seems there’s nothing more to add here. Unless video chat is added in the corner, but that would make it an online office rather than a tool for collaborative development.

Now that the coffee had cooled slightly, Guillermo Rauch - circling an image and noting that it was too heavy - wrapped up his talk on online collaboration and handed the floor to the next speaker at the conference - Lydia Hallie.

Images really do account for a significant portion of a website, not only visually but also in terms of page weight. To add optimized images to a website, the <picture /> and <source /> tags were added to HTML, along with the srcset attribute for <source /> and <img />. It was possible to create a separate component, pass different image formats and sizes into it, and receive a <picture /> element with images optimized for all screen sizes. That is, to perform two steps: compress the images and pass them to the component. However, Vercel decided to go a step further and eliminate the first step for us. Starting with version 10, Next.js can automatically compress all images; shortly after, the ability to compress them using any library of your choice was added. Additionally, the component automatically adds width and height attributes. Nevertheless, even optimized images can negatively impact metrics if they appear on the first screen. Therefore, the next innovation Lydia shared with us was a placeholder for images.

Just recently, I ran into this problem - the first image was ruining the LCP. The potential solutions were: preloading the image, loading it with high priority (which essentially means implementing lazy loading for everything else, including fonts), and loading it with a placeholder. All options were tested, and no clear improvement was noticeable in any of the approaches. It will be interesting to see the test results for Next’s solution. In any case, this solution is fully automated and will be easy to try out.
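For reference, a minimal sketch of that automated solution as it shipped in Next.js 11 (the hero.jpg path is hypothetical; with a static import, Next.js derives the dimensions and the blurred placeholder at build time):

import Image from 'next/image'
// A static import lets Next.js compute width, height, and blurDataURL itself
import hero from '../public/hero.jpg'

const Hero = () => (
  <Image src={hero} alt="Hero" placeholder="blur" priority />
)

export default Hero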

It’s worth pausing here to note how much Next values the developer experience. This thread runs from the very origins of the framework - ease of development and implementation, clear documentation, examples, and a smooth transition to new versions.

The only thing that can be said about the images is that they lack out-of-the-box conversion to modern formats (WebP, JPEG 2000, JPEG XR, AVIF). Unfortunately, it’s difficult to call this feature complete just yet, but its development is certainly encouraging.

At the end of her presentation, Lydia thanked the community and Google contributors - Alex Castle and Joon Park. Google began actively participating in Next’s development in 2019, leading up to the release of Next 9. The goal of the collaboration was to improve performance and optimize built applications. The results were not long in coming, and by version 9.1.7, the results of this collaboration were already evident - a 3–8% reduction in client bundle size. In version 9.2, Alex Castle - whom we’re already familiar with - developed a new code-splitting strategy that reduced the size of Barnebys’ application by 23% (read more). Such collaborations not only help improve existing code but also assist with implementing features tailored to new services and metrics. For example, in May 2020, Google introduced a new approach to metrics - Web Vitals. Literally the very next week, the “reportWebVitals” method was added to Next.js for pages, allowing these metrics to be tracked and sent to the server or to Google Analytics. And the story doesn’t end there, because on June 15 of this year, Google introduced the “Aurora” project, aimed at collaborating with and developing open-source products. Among others, the aforementioned Alex Castle joined this team.
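A minimal sketch of that hook in pages/_app.js (where to send the metric is up to you; logging here is purely illustrative):

// pages/_app.js
export function reportWebVitals(metric) {
  // metric.name is e.g. 'LCP', 'FID', 'CLS', or a Next.js-specific metric
  console.log(metric.name, metric.value)
}

export default function App({ Component, pageProps }) {
  return <Component {...pageProps} />
}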

Now Guillermo hands the floor to Shubhie Panicker, one of the participants in the Aurora project. Google was at the forefront of modern approaches to application design, and now they are eager to share their experience and resources to make the web better together. Shubhie shares that the best approach to improving application performance is to shift as much of the optimization work as possible onto frameworks, so that developers don’t have to deal with it themselves. One could argue with this, but the idea certainly makes sense: by abstracting away standard optimization tasks (such as image compression and font optimization), developers can tackle more complex issues, work on new ideas for improving performance, or focus on the project itself.

The first thing the "Aurora" project worked on was conformance - a tool representing a new approach to handling application errors. Essentially, it combines ESLint, TypeScript checks, and Next.js compiler errors, as well as additional rules related in one way or another to Web Vitals - such as how scripts, images, and links are added. I can’t wait to see the new way of displaying errors in Next [read more].

Overall, error handling in Next.js is a story of its own, full of changes and reworks in the pursuit of perfection. It all began with the very creation of Next.js. The first redesign of error display came as early as version 3.

Then, in the fourth version, they were changed again, but the description of the error’s location was incomplete, so the next change wasn’t long in coming and appeared in the fifth version.

Surprisingly, the sixth version saw no changes. However, the Vercel team realized that this approach wasn’t actually very convenient for modifications and redesigned the error display again in version 7, combining elements from the third and fifth versions.

It was a decent option and lasted all the way until Next 9.4, by which time it had become clear that full-screen error display wasn’t all that convenient for the community.

That redesign was convenient, but still not ideal. We’d like to believe that the new option will be added soon and will stick around for a long time, because it looks much more convenient than what came before. However, this feature is still not finished and hasn’t even been included in an official release.

Then, the floor was given to another Aurora participant - Houssein Djirdeh. First, Houssein ran the conformance check command - next lint - and we were able to see the issues affecting performance.

From the error messages and Houssein’s remarks, we are reminded of the importance of third-party scripts and styles for real-world business. However, while Next.js supported CSS (after Next 9.2) and Sass (after Next 9.3) with imports and modules out of the box, there were no tools for scripts. Therefore, the next topic of discussion was the <Script /> tag. An important feature of this tag is that we can not only add external scripts to the page but also specify exactly when they should load: beforeInteractive (before all app bundles are loaded), afterInteractive (after the app is hydrated), and lazyOnload (after the browser’s onload event).
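In practice it looks something like this (the script URLs are placeholders):

import Script from 'next/script'

const Page = () => (
  <>
    {/* Analytics can wait until the app is hydrated */}
    <Script src="https://example.com/analytics.js" strategy="afterInteractive" />
    {/* A chat widget can wait for the browser's onload event */}
    <Script src="https://example.com/chat.js" strategy="lazyOnload" />
  </>
)

export default Page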

Houssein then told us that 80% of websites use custom fonts. To be honest, I thought that percentage would be higher, but that’s beside the point. Often, these fonts are loaded from Google Fonts. Google has already stated that installing fonts this way is a bad idea. But people continued to use this method because it had clear advantages - it was easy to add, and once loaded, the fonts could be served from the cache on any site. This was possible until 2020, when Google introduced a separate cache for each site, thereby eliminating one of those advantages. Now, Houssein presented a new approach to font optimization - inlining the font CSS at build time for fonts loaded from Google Fonts and Adobe Typekit - which was another useful and truly necessary innovation.

This was the last feature presented to us in the talk, but there were other interesting points as well, and here’s a brief overview of them:

Next.js 11 will use createRoot once the React 18 alpha is installed. It’s nice to see how Next.js collaborates with other tools. For example, support for Babel 7 was introduced while it was still in beta testing, and the same was true for Webpack 4 and Webpack 5. Webpack 5, by the way, has been enabled by default since Next.js 10.2 (starting with Next.js 9.5, you could enable the option when Webpack was at version 5.0.0-beta.30).

The command npx @next/codemod cra-to-next is used to convert an application created with CRA into a Next.js application.

It was a truly interesting presentation; I heard exactly what I wanted to hear. Next has a rich history, a large community, and steady growth and development. I’m especially pleased with the involvement of the Google team, who contributed many ideas to this update and are helping to implement them.

With the release of version 10, it seemed like they had already implemented everything one could possibly think of. However, Vercel went beyond my expectations and added real-time collaboration. The new error display format (despite constant revisions) came as a surprise this time. I can’t wait to try it out, but Next hasn’t mentioned anything about it yet, and so far the discussion has only been about updating the linter for specific rules, which is also extremely useful.

Houssein concluded his part with the phrase, “We love working with great frameworks to help developers make the web faster.” The phrase “make the web faster” became the symbol of this presentation and the collaboration with Google. And I’m sure Aurora will bring us much more light.]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreicp7lr52dc6fgsfpk2rcv7zqmwbarshel5x6noptnycqrpcxlzmyi@png" type="image/jpeg" /></item>
		<item><title><![CDATA[Thematization: History, Causes, and Implementation]]></title><link>https://alexdln.com/blog/themeizer-part-1</link><guid isPermaLink="true">https://alexdln.com/blog/themeizer-part-1</guid><pubDate>Fri, 18 Jun 2021 07:05:00 GMT</pubDate><description><![CDATA[A dark theme for nighttime use isn’t the only reason for adding theming to a website. Another important goal is service accessibility. Worldwide, there are 285 million people with total or partial vision loss; in Russia, there are 218,000, and up to 2.2 billion with various visual impairments]]></description><content:encoded><![CDATA[Introduction. Reasons for its emergence

When the web was just getting started, its sole purpose was to host content (hypertext pages) so that users could access it via the World Wide Web. Back then, design wasn’t even a consideration - after all, why would scientific publications need design? Would that make them any more useful? (The first website). Times have changed, and today the World Wide Web is far from limited to scientific publications. There are blogs, services, social networks, and much, much more. Every website needs its own unique identity; it must engage and attract users. Even scientific websites are gradually realizing this, since most scientists want not only to study various aspects but also to communicate them to the public, thereby increasing their own popularity and the value of their research (for example, 15 out of 15 scientific websites on the list have undergone a redesign in the last 6 years). Ordinary people aren’t interested in a dull website with unclear content. Science is becoming more accessible, and websites are transforming into apps with user-friendly and pleasant interfaces.

Since “convenience” means something different to everyone, there is no clear definition or specific rules for creating a service that is convenient for everyone. In recent years, the concept of “thematization” has become associated with this idea. That is exactly what I want to discuss in this article.

I think this is an extremely useful improvement. Its emergence was inevitable, and it’s even strange that it became popular so late. Both businesses and developers design their products to be accessible to everyone, everywhere, and at all times. That’s why, for example, modern interfaces now feature “shades of yellow” and “shades of blue” options. However, not everyone is satisfied with these features, so services also consider usability and the well-being of their valued users.

A dark theme for nighttime use isn’t the only reason for adding theming to a website. Another important goal is service accessibility. Worldwide, there are 285 million people with total or partial vision loss; in Russia, there are 218,000 [source], and up to 2.2 billion with various visual impairments [source]. Nearly a third of children in Russia “graduate from school wearing glasses” [source]. The statistics are staggering. However, most people are not completely blind but have only minor visual impairments. These can be color-related or quality-related. While accessibility for quality-related impairments is achieved by adding support for different font sizes, for color-related impairments, adding a theme is the ideal solution.

History of Development. An Endless Journey

Every website corresponds to a specific color scheme. First and foremost, this concept is associated with a dark theme, and that is where we should start. In the past, when the color scheme was limited to shades of black and white, implementing a dark theme simply required inverting those shades. It’s a shame that people don’t think about such things when they’re easy to implement, but only when their implementation becomes a challenge in itself. Now, with the deepening of ties to design and the growth of user demands, the color scheme of every website is becoming unique, sometimes unexpected, and truly impressive. In this regard, designers are capable of working wonders, “combining the incompatible.” Unfortunately, such designs only work within their current color scheme; inverting their shades results in a completely awful design. Therefore, when designing modern projects that include multiple themes, color palettes - light, dark, pink, etc. - are created in advance. Later, during the design process, all shades are selected from this palette.

Design is just one component of every website. During development, a website passes through the hands of many people - developers, analysts, testers, and marketers. Theming affects each of them in one way or another. And, first and foremost, of course, the developers.

Adding theming to a project can be an extremely simple task if it’s addressed during the project planning stages. Although it has only become popular in recent years, the technology itself is by no means new. This process, like many others, has been refined and actively developed year after year over the past 5–10 years. Today, it’s hard to even imagine how the pioneers did it. You had to change the classes of all elements, optimize this through color inheritance, and update almost the entire DOM. And all this during the era of a monster like IE - a source of nightmares for seasoned developers - and before the advent of ES6. Now, however, all these problems are a thing of the past for developers. Many incredibly difficult processes are gradually fading into history, leaving future generations of developers with memories of those terrible times and excellent solutions, many of which have been perfected.

JS is one of the most dynamically evolving programming languages, but it is far from the only one evolving on the web. New capabilities are being added and old problems are being resolved in technologies such as HTML and CSS. This, of course, would be impossible without updates to the browsers themselves. The development and widespread adoption of modern browsers have lifted a heavy burden from programmers’ shoulders. These technologies don’t stop there, and I’m confident that in years to come, people will look back on them the same way programmers now look back on IE. All these updates not only simplify development and make it more convenient but also add a range of new capabilities. One such capability is CSS variables, which began appearing in browsers in 2014–2015. 2015 turned out to be a landmark year for the web in many ways - it saw historically significant JS updates, the adoption of the HTTP/2 standard, the emergence of WebAssembly, the rise of minimalism in design, and the explosive growth of ReactJS. These and many other innovations are aimed at speeding up websites, simplifying development, and improving the user experience.

A brief history of CSS variables:

The earliest mention of variables I was able to find dates back to 2012. In April of that year, a description of a new CSS concept - variables - appeared in the specification.

A very interesting method for creating and using variables was described:

:root {
  var-header-color: #06c;
}
h1 { background-color: var(header-color); }


However, before this functionality appeared in browsers, a significant amount of time was spent on planning and debugging. Thus, support for CSS variables was first added to Firefox in 2014. Then, in 2016, Chrome and Safari followed suit.

The final implementation differed from the original idea and looked like this, which is now familiar to us:

:root {
  --header-color: #06c;
}
h1 { background-color: var(--header-color); }


This method of defining variables was first described in the documentation in 2014. That same year, a description of the default value - the second argument of the var() function - was introduced.

It is also interesting to trace the goals behind the addition of variables. Judging by the first versions of the specification, these were eliminating duplicated constants and making adaptive design easier. In 2015, an example of using variables for internationalization appeared. There was practically no mention of theming in that distant and important year for frontend development - the trend toward theming has only emerged in the last few years.

Variables unlocked great potential not only in theming but also in design flexibility overall. Previously, to update a color - such as that of a cancel button - you had to go through all the files, find all the classes that signified cancellation, and update their colors. Now, however, the standard practice is to create a variable, and all elements associated with the cancel action use that variable directly as their color. If a designer ever decides that this button should now be scarlet instead of red, they can confidently propose the change without fear of reproach - what once meant combing through every file is now a one-line update.

In parallel with the CSS specification, its pre- and post-processors have also evolved. Their development was significantly faster, as they did not need to define a specification and promote it across all browsers. One of the early preprocessors, Stylus, was created way back in 2011; Sass and Less had appeared even earlier. They offer a range of advantages and capabilities because all complex functions and modifications are converted to CSS during compilation. One such capability is variables. But these are entirely different variables, more similar to JavaScript than to CSS. Combined with mixins and JavaScript, it became possible to customize themes.

It has been 10 years and more since these preprocessors first appeared - a massive span of time by web standards. Many changes and additions have taken place: HTML5, ES6, 7, 8, 9, 10. JavaScript has acquired a whole range of libraries, building an ecosystem of unimaginable scale around itself. Some of these libraries have become the standard of the modern web - React, Vue, and Angular - replacing the HTML developers were accustomed to with their own JS-based alternatives. JS is also replacing CSS, giving rise to such a remarkable technology as “CSS in JS,” which offers the same capabilities but in a more dynamic and familiar format (sometimes at a high cost, but that’s a whole other story). JS has taken over the web and is now setting out to conquer the entire world.

The modern world needs these features, and since that’s the case, designers need to know how to design them, and developers need to know how to implement them. As already described, there are quite a few ways to implement them. There are just as many nuances and potential issues that can arise during the development of this capability.

Design Planning

As mentioned earlier, it’s much better if the idea of adding themes comes up at the very start of the project. You can lay the foundation right away and continue with it in mind. After all, this is clearly easier than laying the foundation after the house is built. Although, to be fair, it’s worth noting that if the house was designed as modular, with expansion and relocation in mind, then this will be possible without additional effort.

Since a theme is an interface element, designers will take on part of the planning work. Approaches to developing design systems are constantly evolving. In the past, website design was created in programs like Photoshop (though there are still some individuals who do this today, driving developers to the brink of despair). These programs had a host of drawbacks, especially in the days of slow computers and clients with grand ideas. Of course, these programs aren’t going away; they’ll continue to be used for their primary purpose - photo editing and illustration. Their role is being taken over by modern alternatives designed primarily for the web - Avocode, Zeplin, Figma, Sketch. It’s convenient when the main tools a programmer uses are specifically designed for development purposes. In such cases, the evolution of tools keeps pace with the evolution of the fields they serve. These tools are excellent proof of that. When they first appeared, you could copy CSS styles, create grids, and check margins and padding - not with rectangles or even a ruler, but simply with a mouse movement. Then variables appeared, followed by the component-based approach entering the web world, and this approach was incorporated into these tools. They keep up with trends by creating various utilities, adding toolkits, and don’t stop there - miraculously keeping pace with this machine that has accelerated to incredible speeds.

One of the main advantages of the component-based approach is reusability. The same element can be inserted anywhere and then changed all at once with a simple gesture. But this is notable not only for copying in its original form, but also with minor modifications. One such modification could be color.

Theming can extend beyond the website page. One such opportunity is the color of the status bar and tabs in some mobile browsers. It’s also worth considering color schemes for these elements.

Color Palette

When reviewing the design of a new project, you often notice a strange but very popular way of naming colors - blue200. Of course, we can thank the designer for this, as it is indeed a valid approach, albeit for different purposes. This method works well if developers use atomic CSS, which has become the most interesting and user-friendly approach for developers in recent years, though it still lags significantly behind BEM in terms of adoption [source]. However, neither this method of naming variables nor atomic CSS is suitable for websites designed with theming in mind. There are many reasons for this; one of them is that blue200 is always a light blue color, and in order for all light blue buttons to become dark blue, you would need to change the color of all buttons to blue800. A much better option would be to name the color primary-color, because this name could stand for either blue200 or blue800, and it would be clear to everyone involved in development that this variable represents the site’s primary color.

colors: {
  body: '#ECEFF1',
  antiBody: '#263238',
  shared: {
    primary: '#1565C0',
    secondary: '#EF6C00',
    error: '#C62828',
    default: '#9E9E9E',
    disabled: '#E0E0E0',
  },
},


For text, you can use a scheme similar to the buttons (primary, secondary, default, for disabled elements), or you can use color levels:

colors: {
  ...
  text: {
    lvl1: '#263238',
    lvl3: '#546E7A',
    lvl5: '#78909C',
    lvl7: '#B0BEC5',
    lvl9: '#ECEFF1',
  },
},


That is, the primary color, the color for second-level text, and so on. When switching to a dark theme, these levels are inverted, and the interface will look just as good.

Examples of variable names:

shared-primary-color,

text-lvl1-color.

Of course, this method of naming variables cannot be absolutely universal, but it (with minor adjustments) will work for most cases.

Now that we’ve covered design planning in the context of development, we can move on to the next stage.

Code Design

As mentioned earlier, at the code level, there are three main approaches to theme design: using native variables (with or without preprocessors), using “CSS in JS,” and replacing style sheets. Each solution can ultimately be reduced to native variables in one way or another, but the problem is that IE does not support them. Below, we’ll describe two approaches to theme design: using variables in native CSS and using “CSS in JS.”

Key steps in website theming: (1) define the color variables for each theme, (2) set the default theme, (3) configure the manifest and meta tags, (4) switch themes on user action, and (5) save and restore the user’s choice.

The third step is universal for any theme customization option. Therefore, let’s briefly cover it first.

The manifest is a file used primarily for PWAs. However, its contents serve as an alternative to meta tags and load faster than they do. For theming, we are interested in keys such as “theme_color” and “background_color.” Meta tags with these parameters can also be added to the page’s head section.

theme_color - the site’s theme color. The specified color will be used as the tab and status bar color on Android mobile devices. Only a handful of browsers support this feature, but together they account for 67% of the market.

background_color - the color that will be shown before the stylesheet is loaded. Support for this attribute is even more limited than for the theme color.
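A minimal sketch of both options, reusing colors from the palette described above (the values are illustrative):

manifest.json (fragment):

{
  "theme_color": "#1565C0",
  "background_color": "#ECEFF1"
}

The meta-tag equivalent for the theme color:

<meta name="theme-color" content="#1565C0" />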

Variables

It’s worth starting the description of this option with support, as this is perhaps its only drawback.

The complete lack of support in IE, and the long-standing lack of support in Safari and other popular browsers, is not critical, but it is noticeable - even if it mainly affects users who are unwilling to update their browsers and devices. However, IE is still in use and is even more popular than Safari (5.87% vs. 3.62% as of 2020).

Now, regarding the implementation of this method.

The method for naming variables is described in the "Design Planning" section.

The theme classes should store all variables used for theming.

.theme-light {
	--body-color: #ECEFF1;
	--antiBody-color: #263238;
	--shared-primary-color: #1565C0;
	--shared-secondary-color: #EF6C00;
	--shared-error-color: #C62828;
	--shared-default-color: #9E9E9E;
	--shared-disabled-color: #E0E0E0;
	--text-lvl1-color: #263238;
	--text-lvl3-color: #546E7A;
	--text-lvl5-color: #78909C;
	--text-lvl7-color: #B0BEC5;
	--text-lvl9-color: #ECEFF1;
}

.theme-dark {
	--body-color: #263238;
	--antiBody-color: #ECEFF1;
	--shared-primary-color: #90CAF9;
	--shared-secondary-color: #FFE0B2;
	--shared-error-color: #FFCDD2;
	--shared-default-color: #BDBDBD;
	--shared-disabled-color: #616161;
	--text-lvl1-color: #ECEFF1;
	--text-lvl3-color: #B0BEC5;
	--text-lvl5-color: #78909C;
	--text-lvl7-color: #546E7A;
	--text-lvl9-color: #263238;
}


You must decide which theme will be used by default and add its class to the body tag.

If the main goal of your site’s theming is to add a dark theme, you should pay close attention to this point. Proper configuration will provide users with an extremely pleasant experience and won’t strain their eyes if they visit your site late at night in search of valuable content.

There are at least two correct approaches to solving this problem:

2.1) Setting the default theme within CSS

Add a new class that is set by default: .theme-auto

Variables are added to this class based on the device’s theme using media queries:

@media (prefers-color-scheme: dark) {
	body.theme-auto {
		--background-color: #111;
		--text-color: #f3f3f3;
	}
}
@media (prefers-color-scheme: light) {
	body.theme-auto {
		--background-color: #f3f3f3;
		--text-color: #111;
	}
}


Pros of this method:

Cons:

2.2) Setting the default class using JavaScript

JS has a useful feature: it can evaluate and track media queries. One such query, as described above, reflects the user’s device theme.

To check the theme and add the class for the desired theme, add the following code:

if (window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches) {
	document.body.classList.add('theme-dark')
} else {
	document.body.classList.add('theme-light')
}


Additionally, you can subscribe to device theme changes:

window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', (e) => {
	if (e.matches) {
		document.body.classList.remove('theme-light')
		document.body.classList.add('theme-dark')
	} else {
		document.body.classList.remove('theme-dark')
		document.body.classList.add('theme-light')
	}
})


Pros:

Cons:

./button.css

.button {
	color: var(--text-lvl1-color);
	background: var(--shared-default-color);
	...
	&:disabled {
		background: var(--shared-disabled-color);
	}
}
.button-primary {
	background: var(--shared-primary-color);
}
.button-secondary {
	background: var(--shared-secondary-color);
}


./appbar.css

.appbar {
	display: flex;
	align-items: center;
	padding: 8px 0;
	color: var(--text-lvl9-color);
	background-color: var(--shared-primary-color);
}


This is probably the simplest step. You need to add a listener to the button that will:

document.body.classList.remove('theme-light', 'theme-rose')


document.body.classList.add('theme-dark')


The theme can be saved either in cookies or in local storage. The structure will be the same in both cases: theme: 'light' | 'dark' | 'rose'
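Putting these pieces together with saving, a minimal sketch of such a listener (the .theme-toggle selector and the hard-coded 'dark' value are illustrative):

const button = document.querySelector('.theme-toggle')

button.addEventListener('click', () => {
	// Drop the classes of the other configured themes
	document.body.classList.remove('theme-light', 'theme-rose')
	// Apply the new theme and persist the choice
	document.body.classList.add('theme-dark')
	localStorage.setItem('theme', 'dark')
})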

At the top level of the site, you need to add logic to retrieve the saved theme and add the appropriate class to the body tag. For example, in the case of local storage:

const savedTheme = localStorage.getItem('theme')
if (['light', 'dark', 'rose'].includes(savedTheme)) {
	document.body.classList.remove('theme-light', 'theme-dark', 'theme-rose')
	document.body.classList.add(`theme-${savedTheme}`)
}


That is, if the saved theme is one of the configured themes, we remove the classes added by default and add the class with the saved theme.

CSS-in-JS

This option is best suited for client-side applications.

As an example, we’ll look at the combination of React + styled-components + TypeScript.

The method for naming variables is described in the “Design Planning” section.

The theme objects must store all variables used for theming.
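A minimal sketch of what such a theme file (./theme.ts) might look like, reusing the palette from the “Color Palette” section (the dark values mirror the CSS example above):

// ./theme.ts
const themes = {
  light: {
    colors: {
      body: '#ECEFF1',
      shared: { primary: '#1565C0', secondary: '#EF6C00', default: '#9E9E9E', disabled: '#E0E0E0' },
      text: { lvl1: '#263238', lvl9: '#ECEFF1' },
    },
  },
  dark: {
    colors: {
      body: '#263238',
      shared: { primary: '#90CAF9', secondary: '#FFE0B2', default: '#BDBDBD', disabled: '#616161' },
      text: { lvl1: '#ECEFF1', lvl9: '#263238' },
    },
  },
}

export default themes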

You must decide which theme will be used by default and pass the appropriate object to the Provider.

./App.tsx

import { useState } from 'react'
import { ThemeProvider } from 'styled-components'
import themes from './theme'

const App = () => {
	const [theme, setTheme] = useState<'light' | 'dark'>('light')
	const onChangeTheme = (newTheme: 'light' | 'dark') => {
		setTheme(newTheme)
	}
	return (
		<ThemeProvider theme={themes[theme]}>
			{/* ... */}
		</ThemeProvider>
	)
}


If the main goal of your site’s theming is to add a dark theme, you should pay attention to this point. Proper configuration will provide users with an extremely pleasant experience and won’t strain their eyes if they visit your site late at night in search of valuable content.

To do this, you can set the default theme at the top level of the app:

useEffect(() => {
  if (window.matchMedia?.('(prefers-color-scheme: dark)').matches) {
    onChangeTheme('dark')
  }
}, [])


Additionally, you can subscribe to device theme changes:

useEffect(() => {
  window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', (e) => {
    if (e.matches) {
      onChangeTheme('dark')
    } else {
      onChangeTheme('light')
    }
  })
}, [])


./src/components/atoms/Button/index.tsx

import type { ButtonHTMLAttributes } from 'react'
import styled from 'styled-components'

interface StyledProps extends ButtonHTMLAttributes<HTMLButtonElement> {
  fullWidth?: boolean;
  color?: 'primary' | 'secondary' | 'default'
}

const Button = styled.button<StyledProps>(({ fullWidth, color = 'default', theme }) => `
  color: ${theme.colors.text.lvl9};
  width: ${fullWidth ? '100%' : 'fit-content'};
  ...
  &:not(:disabled) {
    background: ${theme.colors.shared[color]};
    cursor: pointer;
    &:hover {
      opacity: 0.8;
    }
  }
  &:disabled {
    background: ${theme.colors.shared.disabled};
  }
`)

export interface Props extends StyledProps {
  loading?: boolean;
}

export default Button



./src/components/atoms/AppBar/index.tsx

import styled from 'styled-components'

const AppBar = styled.header(({ theme }) => `
  display: flex;
  align-items: center;
  padding: 8px 0;
  color: ${theme.colors.text.lvl9};
  background-color: ${theme.colors.shared.primary};
`)

export default AppBar


The name of the current theme is changed via the Context API or Redux/MobX.

./App.tsx

import { useState } from 'react'
import { ThemeProvider } from 'styled-components'
import ThemeContext from './contexts/ThemeContext'
import themes from './theme'

const App = () => {
  const [theme, setTheme] = useState<'light' | 'dark'>('light')
  const onChangeTheme = (newTheme: 'light' | 'dark') => {
    setTheme(newTheme)
  }
  return (
    <ThemeProvider theme={themes[theme]}>
      <ThemeContext.Provider value={{ theme, onChangeTheme }}>
        ...
      </ThemeContext.Provider>
    </ThemeProvider>
  )
}


./src/components/molecules/Header/index.tsx

import React, { useContext } from 'react'
import Grid from '../../atoms/Grid'
import Container from '../../atoms/Container'
import Button from '../../atoms/Button'
import AppBar from '../../atoms/AppBar'
import ThemeContext from '../../../contexts/ThemeContext'

const Header: React.FC = () => {
  const { theme, onChangeTheme } = useContext(ThemeContext)
  return (
    <AppBar>
      <Container>
        <Grid container alignItems="center" justify="space-between" gap={1}>
          <h1>
            Themization
          </h1>
          <Button color="secondary" onClick={() => onChangeTheme(theme === 'light' ? 'dark' : 'light')}>
            set theme
          </Button>
        </Grid>
      </Container>
    </AppBar>
  )
}

export default Header



The theme can be saved either in cookies or in local storage. The structure will be the same in both cases: theme: 'light' | 'dark' | 'rose'

At the top level of the site, you need to add logic to retrieve the saved theme and update the current theme value. For example, in the case of local storage:

./App.tsx

...

function App() {
  const [theme, setTheme] = useState<'light' | 'dark'>('light')
  const onChangeTheme = (newTheme: 'light' | 'dark') => {
    localStorage.setItem('theme', newTheme)
    setTheme(newTheme)
  }
  useEffect(() => {
    const savedTheme = localStorage?.getItem('theme') as 'light' | 'dark' | null
    if (savedTheme && Object.keys(themes).includes(savedTheme)) setTheme(savedTheme)
    else if (window.matchMedia?.('(prefers-color-scheme: dark)').matches) {
      onChangeTheme('dark')
    }
  }, [])
  useEffect(() => {
    window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', (e) => {
      if (e.matches) {
        onChangeTheme('dark')
      } else {
        onChangeTheme('light')
      }
    })
  }, [])
  return (
    ...
  )
}


Conclusion

There are many ways to implement theming - from creating separate stylesheet files for each theme and switching them as needed, to native CSS variables, to CSS-in-JS solutions. The browser API allows you to customize the service for each specific user by reading and tracking their device’s theme.

Theming is gaining momentum, with major companies one after another implementing it into their services. A well-designed dark theme improves the user experience, reduces eye strain, saves battery life, and provides the much-desired ability to customize the service to one’s preferences. There are many benefits at a low cost, especially if everything is planned in advance.

Of course, not everyone needs theming - it always involves some complications, albeit minor ones. It is needed, first and foremost, by apps and web services.

Google and Apple services, banks, social networks, editors, GitHub, and GitLab. The list goes on and on, even though this is just the beginning of the technology’s development - and what lies ahead is bigger, better, and simpler.

Part Two - New Browser APIs. Theming with SSR. Choosing Between SPA, SSR, and SSG

Part Three - Themeizer - A Young Companion to Styles]]></content:encoded><enclosure url="https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:er6erflnnxcozlbqmrpflt6h/bafkreigpdlbir26qspiyp2fmi2r3ipktd7nqptvfmzqteoxa6tr2mk5nhe@png" type="image/jpeg" /></item>
	</channel>
</rss>