<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Nico Burniske - Thoughts</title>
        <link>https://nicoburniske.com/thoughts/</link>
        <description>Thoughts on software, life, and everything in between.</description>
        <language>en-us</language>
        <item>
            <title>Forging Leptos Query</title>
            <link>https://nicoburniske.com/thoughts/forging_leptos_query</link>
            <description><![CDATA[A robust asynchronous state management library for Leptos.]]></description>
            <category>rust</category>
            <category>leptos</category>
            <category>react</category>
            <category>typescript</category>
            <pubDate>Sat, 05 Aug 2023 00:00:00 +0000</pubDate>
            <content:encoded><![CDATA[<p>Since falling down the Rust rabbithole, I've grown fond of <a href="https://github.com/leptos-rs/leptos">Leptos</a>, a bleeding edge full-stack framework for building fast web apps in Rust. In my time using React, my go-to async state management library has been <a href="https://tanstack.com/query/latest">Tanstack Query</a>, and the Leptos ecosystem had no equivalent. So I decided to build <a href="https://github.com/nicoburniske/leptos_query">Leptos Query</a>.</p>
&lt;p align=&quot;center&quot;&gt;
    &lt;a href=&quot;https://github.com/nicoburniske/leptos_query&quot;&gt;
        &lt;img src=&quot;https://raw.githubusercontent.com/nicoburniske/leptos_query/main/logo.svg&quot; alt=&quot;Leptos Query Logo&quot; width=&quot;300&quot; height=&quot;300&quot; class=&quot;bg-base-100&quot;/&gt;
    &lt;/a&gt;
&lt;/p&gt;
<h2>The Dark Age of React</h2>
<hr />
<p>Tanstack Query (TSQ) is a library that will always have a soft spot in my heart. It's a framework-agnostic tool with the perfect level of abstraction for a single problem: managing async state within a synchronous user interface.</p>
<p>Before TSQ, many engineers struggled to accurately maintain a complex global state on the client. Constant challenges included caching previous responses to avoid loading states, data going out of date, and changes made by another user not being reflected until a page refresh. API responses were also commonly cached in Redux, where you had to keep track of each request's execution, handle error states and retries, and more. It was a mess.</p>
<p><strong>Redux Flashbacks</strong>
<img src="/image/blog/PTSD_chihuahua.jpg" alt="Redux Flashbacks" /></p>
<p>But what exactly is the 'client state'? TSQ shifted the landscape, revealing that many things considered 'client state' were actually 'server state.' In many cases, the true state lives on the server, not in a faulty client-side state machine. This simplifies the problem: when in doubt, you can just ask the server and use the response to render the UI.</p>
<p>This realization led to TSQ's revolutionary concept: <strong>The Query</strong>. A query represents the state of an asynchronous process that yields a result, which is bound to a unique key. It includes features such as SWR caching with background refetching, loading and error states, retries, request deduplication, refetch intervals, invalidation, and more. These are all common properties of asynchronous processes, and TSQ provides a powerful abstraction to manage them. Instead of trying to manage all of the complexity yourself by manually making an API request inside of a <code>useEffect</code> hook, you can just use TSQ's declarative <code>useQuery</code> hook.</p>
<p>Here are some of the benefits of using a Query Manager like Tanstack Query.</p>
<h3>TSQ: Query Caching</h3>
<hr />
<p>Every query has a unique key to identify it. This key is used to store the query in the cache and to retrieve it when needed. Because queries are bound to a unique key, TSQ can also automatically deduplicate requests: if a query is already in flight, the system will not make another request, but instead waits for the existing one to complete. This is a powerful feature that ensures the client never makes unnecessary requests to the server.</p>
<p>One of TSQ's most useful features is its configurable &quot;Stale While Revalidate&quot; (SWR) strategy for query caching. This approach drastically simplifies data fetching, improves user experience, and ensures the client stays up to date with the latest data.</p>
<h4>What is Stale While Revalidate (SWR)?</h4>
<p>SWR is a cache invalidation strategy that allows the client to use stale, or slightly outdated, data while simultaneously fetching the latest data from the server. This approach provides an immediate response using cached data, followed by a seamless update once the fresh data is retrieved.</p>
<h4>Benefits of SWR</h4>
<ul>
<li><strong>Caching</strong>: When a query is executed, the result is stored in the cache under its unique key.</li>
<li><strong>Background Refetch</strong>: If the cached data is considered stale, the system starts fetching the latest data from the server in the background, ensuring that the information displayed to the user is updated as soon as the fresh data is available.</li>
<li><strong>Fewer Loading States</strong>: When a user requests data that's been previously fetched, the cached (possibly stale) data is displayed immediately. This ensures a responsive user experience, especially on subsequent page loads.</li>
<li><strong>Seamless Transition</strong>: Once the latest data is fetched, the UI automatically updates, replacing the stale data with the latest fresh data.</li>
</ul>
<h3>TSQ: The Best of Dynamic Typing</h3>
<p>TSQ makes beautiful tradeoffs between type safety and ergonomics, leveraging a mix of static and dynamic typing made possible by Typescript.</p>
<p>Each Query has an associated key. In TSQ, all keys are <strong>arrays</strong>.</p>
<p>The Query Cache is just a JavaScript object, where the keys are serialized Query Keys and the values are your Query Data plus meta information.</p>
<p>So in Typescript terms we can use the following type to represent a Query Cache. Keep in mind these are drastically simplified for the sake of this article.</p>
<pre><code class="language-typescript">type QueryCache = Map&lt;string, Query&gt; // keyed by serialized Query Keys

type Query = {
    data: any,
    // Meta information ...
}
</code></pre>
<p>Let's look at how this would look for an example query in React + Typescript, where we have a query that shows if a user likes a song or not.</p>
<pre><code class="language-typescript">// Query for a Track's Like Status.
const useTrackLikeQuery = (trackId: string): UseQueryResult&lt;boolean, unknown&gt; =&gt; {
   return useQuery({
      queryKey: trackLikeQueryKey(trackId),
      queryFn: () =&gt; getTrackLike(trackId),
   })
}

// Query Key.
const trackLikeQueryKey = (trackId: string): string[] =&gt; ['TrackLike', trackId]

// Query Fetcher.
const getTrackLike = async (trackId: string): Promise&lt;boolean&gt; =&gt; {
   //…
}
</code></pre>
<p>The Query <code>useTrackLikeQuery</code> returns a <code>UseQueryResult</code> where the data being fetched is of type <code>boolean</code>, and the error type is <code>unknown</code> (one of the tragedies of JavaScript).</p>
<p>We can see that the query key is an array of strings, where the first element is a label 'TrackLike' to differentiate this category of queries (e.g. Track likes in the Query Cache), and the second element is the trackId. This query key function guarantees that every query will have a unique slot in the cache.</p>
<p>It's important to note that the type safety is inferred from the invocation of <code>useQuery</code> in <code>useTrackLikeQuery</code>. The cache itself has no notion of the type of data being stored.</p>
<h3>TSQ: A Common Footgun</h3>
<p>If you somehow manage to have non-unique query keys, you can have multiple query value types for the same key, and this can lead to runtime errors.</p>
<pre><code class="language-typescript">// Query for a Track.
const useTrackQueryConflict = (trackId: string): UseQueryResult&lt;Track, unknown&gt; =&gt; {
   return useQuery({
      // Duplicate Query Key!
      queryKey: trackLikeQueryKey(trackId),
      queryFn: () =&gt; getTrack(trackId),
   })
}

type Track = {
    trackId: string,
    trackName: string,
    // ...
}

// Query Fetcher.
const getTrack = async (trackId: string): Promise&lt;Track&gt; =&gt; {
   //…
}
</code></pre>
<p>Note how we are using the same Query Key function <code>trackLikeQueryKey</code> for both <code>useTrackLikeQuery</code> and <code>useTrackQueryConflict</code>. This is a problem. If we are simultaneously using <code>useTrackLikeQuery</code> and <code>useTrackQueryConflict</code> with the same trackId in our React App, we will very likely hit a runtime error, because one call site expects a <code>boolean</code> while the other expects a <code>Track</code> object.</p>
<p>I want to emphasize that once you are aware of this footgun, it is <strong>NOT</strong> common in practice. It's easy to avoid by ensuring your query keys are unique. But it helps you understand the dynamism of TSQ, and how it's leveraged to make the library so ergonomic.</p>
<h2>Porting Tanstack Query to Rust</h2>
<hr />
<p>Now that we've covered how TSQ works, how can we implement an Async Query manager in Leptos, a Rust Web Framework?</p>
<p>The task is non-trivial, given how different Rust and JavaScript are. Rust is a compiled language known for its expressive type system with Algebraic Data Types and Traits, granular memory control, powerful macros, and concurrent programming capabilities. Meanwhile, JavaScript's interpreted, just-in-time compiled, dynamically typed nature offers a simpler, more practical approach to development, though often an unsafe one.</p>
<p>It's worth mentioning that TSQ's core implementation is framework agnostic and it provides integration wrappers for React, SolidJS, Vue, and Svelte. I don't have any such constraint, and can leverage Leptos' reactivity directly.</p>
<h3>Comparing Leptos and React</h3>
<ol>
<li>
<p>Rendering and Markup: Both Leptos and React employ declarative rendering with JSX/RSX markup languages.</p>
</li>
<li>
<p>Virtual DOM vs. Reactivity: React uses a virtual DOM and follows specific rules for hooks to guarantee re-render stability. In contrast, Leptos champions fine-grained reactivity using Signals.</p>
</li>
<li>
<p>Full-Stack Development: Leptos is designed to be full-stack and isomorphic, targeting WebAssembly and supporting server-side rendering. React is primarily a front-end library, though meta-frameworks transform it into a full-stack solution. That said, having the Leptos backend run in Rust makes SSR significantly faster and more efficient than full-stack JS frameworks.</p>
</li>
<li>
<p>Maturity: React has been a standard in the JavaScript community since 2013, and Leptos is just a year old.</p>
</li>
</ol>
<h3>Dynamic Typing in Rust</h3>
<p>Rust's type system is robust, but what if we want some of the flexibility of dynamic typing like in TSQ? Is there a way to have the best of both worlds?</p>
<p>Actually, yes! We can look into the <a href="https://doc.rust-lang.org/std/any/"><code>std::any</code></a> module, which has some neat tools for type reflection. One challenge for implementing a Query Manager in Rust is handling a lot of dynamic entries in one cache, each needing a unique Key and Value combination.</p>
<p>So I came up with a solution: the 'AnyMap' data structure. It's the backbone of Leptos Query, blending Rust's strong typing with the adaptability needed for today's web apps.</p>
<pre><code class="language-rust">type AnyMap = HashMap&lt;TypeKey, Box&lt;dyn Any&gt;&gt;;

type TypeKey = (TypeId, TypeId);

struct CacheEntry&lt;K, V&gt;(HashMap&lt;K, Query&lt;K, V&gt;&gt;);
</code></pre>
<p>The outer Map is indexed by a <code>TypeKey</code>, which is a tuple of two <code>TypeId</code>s. The first <code>TypeId</code> is the type of the Query Key, and the second <code>TypeId</code> is the type of the Query Value.</p>
<p>This guarantees that we will <strong>always</strong> get the correct type of data from the cache, which is a huge win for safety. This approach also lets you use the same key for different value types, which is extremely convenient.</p>
<p>The next thing to notice is the <code>Box&lt;dyn Any&gt;</code>. This is the magic that lets us store any type of data in the cache. Each value is actually of type <code>CacheEntry&lt;K,V&gt;</code>, but we box it as <code>Box&lt;dyn Any&gt;</code> so that instances with different type parameters can live in the same cache.</p>
<p>When we have a <code>Box&lt;dyn Any&gt;</code>, we can use the <code>downcast</code> functions to get the inner value back. This is a runtime operation, but it's safe because we know the type of the inner value. Though runtime reflection and dynamic dispatch via <code>Box&lt;dyn Any&gt;</code> and <code>downcast</code>ing carry a cost, the developer ergonomics, safety, and caching efficiency far outweigh it.</p>
<p>Here's the function at the core of the Query Client, showing how we extract the typed inner Map from the cache.</p>
<pre><code class="language-rust">/// The Cache Client to store query data.
/// Exposes utility functions to manage queries.
pub struct QueryClient {
    pub(crate) cx: Scope,
    pub(crate) cache: Rc&lt;RefCell&lt;AnyMap&gt;&gt;,
}

impl QueryClient {

    /// Utility function to find or create a cache entry for the &lt;K,V&gt; combination, and then apply the function to it.
    fn use_or_insert_cache&lt;K, V, R&gt;(
        &amp;self,
        // Function to apply to the cache entry.
        func: impl FnOnce((Scope, &amp;mut HashMap&lt;K, Query&lt;K, V&gt;&gt;)) -&gt; R + 'static,
    ) -&gt; R
    where
        K: 'static,
        V: 'static,
    {
        // borrow the AnyMap!
        let mut cache = self.cache.borrow_mut();

        // Create the TypeKey.
        let type_key: TypeKey = (TypeId::of::&lt;K&gt;(), TypeId::of::&lt;V&gt;());

        // Find or create the cache entry.
        let cache: &amp;mut Box&lt;dyn Any&gt; = match cache.entry(type_key) {
            Entry::Occupied(o) =&gt; o.into_mut(),
            Entry::Vacant(v) =&gt; {
                let wrapped: CacheEntry&lt;K, V&gt; = CacheEntry(HashMap::new());
                v.insert(Box::new(wrapped))
            }
        };

        // Downcast the cache entry to the correct type.
        let cache: &amp;mut CacheEntry&lt;K, V&gt; = cache
            .downcast_mut::&lt;CacheEntry&lt;K, V&gt;&gt;()
            .expect(
                &quot;Error: Query Cache Type Mismatch. This should not happen. Please file a bug report.&quot;,
            );

        // Call the function with the cache entry.
        func((self.cx, &amp;mut cache.0))
    }
}
</code></pre>
<h3>Leptos Resource - Primitive for Async Tasks</h3>
<p>Leptos provides a <a href="https://leptos-rs.github.io/leptos/async/10_resources.html">Resource</a> primitive to integrate async tasks into the synchronous reactive system.</p>
<p>Resources integrate with <a href="https://leptos-rs.github.io/leptos/async/11_suspense.html">Suspense</a> and <a href="https://leptos-rs.github.io/leptos/async/12_transition.html">Transition</a> components to simplify the loading process and work with server-side rendering. Reading the resource from within a <code>&lt;Suspense/&gt;</code> registers it with that <code>&lt;Suspense/&gt;</code>, and the fallback is displayed until the resource resolves.</p>
<p>Here's a Todo Example using the Resource primitive.</p>
<p>Let's define the following endpoint to get a Todo by ID.</p>
<pre><code class="language-rust">use leptos::*;
use serde::*;
use std::sync::RwLock;
use std::time::Duration;

#[derive(Serialize, Deserialize, Clone)]
struct Todo {
    id: u32,
    content: String,
}

// Don't do this in a real app! Just for demo purposes.
#[cfg(feature = &quot;ssr&quot;)]
static GLOBAL_TODOS: RwLock&lt;Vec&lt;Todo&gt;&gt; = RwLock::new(vec![]);

type TodoResponse = Result&lt;Option&lt;Todo&gt;, ServerFnError&gt;;

#[server(GetTodo, &quot;/api&quot;)]
async fn get_todo(id: u32) -&gt; Result&lt;Option&lt;Todo&gt;, ServerFnError&gt; {
    // Mimic a delay.
    tokio::time::sleep(Duration::from_millis(1000)).await;
    let todos = GLOBAL_TODOS.read().unwrap();
    Ok(todos.iter().find(|t| t.id == id).cloned())
}
</code></pre>
<p>Now let's use the endpoint in a component. This component will fetch a Todo from the server, and display it using a <code>Resource</code>. If the Todo is not found, it will display &quot;Not Found&quot;.</p>
<pre><code class="language-rust">#[component]
fn TodoWithResource(cx: Scope) -&gt; impl IntoView {
    let (todo_id, set_todo_id) = create_signal(cx, 0_u32);

    let todo_resource: Resource&lt;u32, TodoResponse&gt; = create_resource(cx, todo_id, get_todo);

    view! { cx,
        &lt;div&gt;
            &lt;Suspense fallback=move || {
                view! { cx, &lt;p&gt;&quot;Loading...&quot;&lt;/p&gt; }
            }&gt;
                {move || {
                    todo_resource
                        .read(cx)
                        .map(|response| {
                            match response.ok().flatten() {
                                Some(todo) =&gt; todo.content,
                                None =&gt; &quot;Not found&quot;.into(),
                            }
                        })
                }}
            &lt;/Suspense&gt;
        &lt;/div&gt;
    }
}
</code></pre>
<h3>If we have Resources, why do we need Queries?</h3>
<p>Resources don't provide any caching natively. This means that every time we mount a component, such as our <code>&lt;TodoWithResource/&gt;</code>, we make a network request to fetch the data.</p>
<p>If you want caching, you have to manually lift the resource into a higher scope (closer to the root of the component tree). And every time the key changes, the resource is re-fetched, so there's no caching per key, only per resource.</p>
<p>This involves a lot of unnecessary boilerplate, and becomes very tedious if you have many resources.</p>
<p>Here's a simple example:</p>
<pre><code class="language-rust">// Root component for our Leptos App
#[component]
fn App(cx: Scope) -&gt; impl IntoView {
    let (todo_id, set_todo_id) = create_signal(cx, 0_u32);
    // Store the resource in a higher scope's context.
    let todo: Resource&lt;u32, TodoResponse&gt; = create_resource(cx, todo_id, get_todo);
    provide_context(cx, todo);

    view!{cx,
        &lt;TodoComponent/&gt;
    }
}

#[component]
fn TodoComponent(cx: Scope) -&gt; impl IntoView {
    let todo_resource: Resource&lt;u32, TodoResponse&gt; = use_context(cx).expect(&quot;No Todo Resource Found!&quot;);

    view! {cx,
        &lt;div&gt;
            &lt;Suspense fallback=move || {
                view! { cx, &lt;p&gt;&quot;Loading...&quot;&lt;/p&gt; }
            }&gt;
                {move || {
                        todo_resource
                        .read(cx)
                        .map(|response| {
                            match response.ok().flatten() {
                                Some(todo) =&gt; todo.content,
                                None =&gt; &quot;Not found&quot;.into(),
                            }
                        })
                }}
            &lt;/Suspense&gt;
        &lt;/div&gt;
    }
}

</code></pre>
<h2>Leptos Query</h2>
<hr />
<p>Leptos Query uses Resources internally to stay compatible with SSR and Suspense, and provides a simpler API, SWR caching, and many other niceties out of the box.</p>
<p>Here's an example. We are storing a CacheEntry of <code>&lt;u32, TodoResponse&gt;</code> in the QueryClient's cache.</p>
<p>Given the response is stored in the cache on a per-key (<code>u32</code>) basis (the <code>todo_id</code>), any subsequent loads for a specific todo will not involve any foreground loading, and will be served from the cache. If the query is considered stale, it will be re-fetched in the background, and the UI will be updated with the new response once it completes. Stale time is configurable using <code>QueryOptions</code>.</p>
<pre><code class="language-rust">use leptos_query::*;

#[component]
fn TodoComponentWithQuery(cx: Scope) -&gt; impl IntoView {
    let (todo_id, set_todo_id) = create_signal(cx, 0_u32);

    let QueryResult { data, .. } = leptos_query::use_query(cx, todo_id, get_todo, QueryOptions::default());

    view! {cx,
        &lt;Suspense
            fallback=move || view! { cx, &lt;p&gt;&quot;Loading...&quot;&lt;/p&gt; }
        &gt;
            &lt;h2&gt;&quot;Todo&quot;&lt;/h2&gt;
            {move || {
                data.get()
                    .map(|a| {
                        match a.ok().flatten() {
                            Some(todo) =&gt; todo.content,
                            None =&gt; &quot;Not found&quot;.into(),
                        }
                    })
            }}
        &lt;/Suspense&gt;

    }
}
</code></pre>
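<p>As a non-runnable sketch, tuning the stale time would look something like the fragment below. The field names are an assumption based on the version of Leptos Query current at the time of writing; consult the crate docs for the authoritative <code>QueryOptions</code> API.</p>

```rust
// Hypothetical sketch: override only the stale time, keeping the other
// defaults via struct update syntax. Field names are assumptions.
let options = QueryOptions {
    stale_time: Some(Duration::from_secs(10)),
    ..QueryOptions::default()
};

let QueryResult { data, .. } = leptos_query::use_query(cx, todo_id, get_todo, options);
```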
<h3>QueryClient: Interacting with Query Cache directly</h3>
<p>The QueryClient lets you interact with the query cache to invalidate queries, observe queries, and make optimistic updates.</p>
<p>Let's beef up our Todo Example a bit.</p>
<ol>
<li>We will add an endpoint and component to load all the todos.</li>
<li>Add a form to create a new todo.</li>
<li>Add an input to load a specific todo by id.</li>
</ol>
<p>Starting with the server endpoints.</p>
<pre><code class="language-rust">// Get all todos
#[server(GetTodos, &quot;/api&quot;)]
pub async fn get_todos() -&gt; Result&lt;Vec&lt;Todo&gt;, ServerFnError&gt; {
    tokio::time::sleep(Duration::from_millis(1000)).await;
    let todos = GLOBAL_TODOS.read().unwrap();
    Ok(todos.clone())
}

// Add a todo.
#[server(AddTodo, &quot;/api&quot;)]
pub async fn add_todo(content: String) -&gt; Result&lt;Todo, ServerFnError&gt; {
    let mut todos = GLOBAL_TODOS.write().unwrap();

    let new_id = todos.last().map(|t| t.id + 1).unwrap_or(0);

    let new_todo = Todo {
        id: new_id,
        content,
    };

    todos.push(new_todo.clone());

    Ok(new_todo)
}
</code></pre>
<p>Now let's make a component to load all the todos.</p>
<pre><code class="language-rust">#[component]
fn AllTodos(cx: Scope) -&gt; impl IntoView {
    let QueryResult { data, .. } = use_query(
        cx,
        || (),
        |_| async move { get_todos().await.unwrap_or_default() },
        QueryOptions::default(),
    );

    let todos: Signal&lt;Vec&lt;Todo&gt;&gt; = Signal::derive(cx, move || data.get().unwrap_or_default());

    view! { cx,
        &lt;h2&gt;&quot;All Todos&quot;&lt;/h2&gt;
        &lt;Suspense fallback=move || {
            view! { cx, &lt;p&gt;&quot;Loading...&quot;&lt;/p&gt; }
        }&gt;
            &lt;ul&gt;
                &lt;Show
                    when=move || !todos.get().is_empty()
                    fallback=|cx| {
                        view! { cx, &lt;p&gt;&quot;No todos&quot;&lt;/p&gt; }
                    }
                &gt;
                    &lt;For
                        each=todos
                        key=|todo| todo.id
                        view=move |cx, todo| {
                            view! { cx,
                                &lt;li&gt;
                                    &lt;span&gt;{todo.id}&lt;/span&gt;
                                    &lt;span&gt;&quot; &quot;&lt;/span&gt;
                                    &lt;span&gt;{todo.content}&lt;/span&gt;
                                &lt;/li&gt;
                            }
                        }
                    /&gt;
                &lt;/Show&gt;
            &lt;/ul&gt;
        &lt;/Suspense&gt;
    }
}
</code></pre>
<p>And another component for creating a Todo. Note how we're watching the response of the <code>add_todo</code> action. When the response is successful, we invalidate the query cache for the <code>TodoResponse</code> and <code>Vec&lt;Todo&gt;</code> queries. This will cause any active queries to immediately refetch in the background, updating the cache and the UI.</p>
<pre><code class="language-rust">#[component]
fn AddTodo(cx: Scope) -&gt; impl IntoView {
    let add_todo = create_server_action::&lt;AddTodo&gt;(cx);

    let response = add_todo.value();

    let client = use_query_client(cx);

    create_effect(cx, move |_| {
        // If action is successful.
        if let Some(Ok(todo)) = response.get() {
            let id = todo.id;
            // Invalidate individual TodoResponse.
            client.clone().invalidate_query::&lt;u32, TodoResponse&gt;(id);

            // Invalidate AllTodos.
            client.clone().invalidate_query::&lt;(), Vec&lt;Todo&gt;&gt;(());
        }
    });

    view! { cx,
        &lt;ActionForm action=add_todo&gt;
            &lt;label&gt;&quot;Add a Todo &quot; &lt;input type=&quot;text&quot; name=&quot;content&quot;/&gt;&lt;/label&gt;
            &lt;input type=&quot;submit&quot; autocomplete=&quot;off&quot; value=&quot;Add&quot;/&gt;
        &lt;/ActionForm&gt;
    }
}
</code></pre>
<p>Here's a demo.</p>
<p>Note how two requests are initiated as soon as a Todo is created: one for the <code>TodoResponse</code> and one for the <code>Vec&lt;Todo&gt;</code>. Each response then takes a second to complete, after which you get the updated query.</p>
<h4>Todo Invalidation Demo</h4>
&lt;video class=&quot;my-8 shadow-md max-w-2xl w-full object-cover min-h-[24rem] max-h-fit mx-auto&quot; controls&gt;
  &lt;source src=&quot;/image/blog/invalidate_todos_on_create.mov&quot; type=&quot;video/mp4&quot;&gt;
&lt;/video&gt;
<p>If you really want the maximum speed, you can perform an optimistic update like this, which will immediately update the entry in the cache, and then refetch in the background (confirming the change with the server).</p>
<pre><code class="language-rust">    create_effect(cx, move |_| {
        // If action is successful.
        if let Some(Ok(todo)) = response.get() {
            let id = todo.id;
            // Invalidate individual TodoResponse.
            client.clone().invalidate_query::&lt;u32, TodoResponse&gt;(id);

            // Invalidate AllTodos.
            client.clone().invalidate_query::&lt;(), Vec&lt;Todo&gt;&gt;(());

            // Optimistic update.
            let as_response = Ok(Some(todo));
            client.set_query_data::&lt;u32, TodoResponse&gt;(id, move |_| Some(as_response));
        }
    });
</code></pre>
<h4>Optimistic Update Demo</h4>
&lt;video class=&quot;my-8 shadow-md max-w-2xl w-full object-cover min-h-[24rem] max-h-fit mx-auto&quot; controls&gt;
  &lt;source src=&quot;/image/blog/find_missing_sock.mov&quot; type=&quot;video/mp4&quot;&gt;
&lt;/video&gt;
<p>It's important to recognize how much legwork you'd have to do to get this behavior without a library like Leptos Query. You'd have to manually manage the cache and refetch queries yourself.</p>
<p>If you're curious and want to play around with it more, check out the <a href="https://github.com/nicoburniske/leptos_query/tree/main/example/start-axum">example project</a>.</p>
<h3>Invalidating Multiple Queries</h3>
<p>You can invalidate groups of related queries by using <a href="https://docs.rs/leptos_query/latest/leptos_query/struct.QueryClient.html#method.invalidate_query_type"><code>QueryClient::invalidate_query_type</code></a>.</p>
<pre><code class="language-rust">let client = use_query_client(cx);
// Invalidates all queries of type `TodoResponse`, where key is `u32`.
client.invalidate_query_type::&lt;u32, TodoResponse&gt;();

// The queries below will be invalidated.
use_query(cx, || 1, get_todo, QueryOptions::default());
use_query(cx, || 2, get_todo, QueryOptions::default());

</code></pre>
<p>And you can also invalidate every query in the cache using <a href="https://docs.rs/leptos_query/latest/leptos_query/struct.QueryClient.html#method.invalidate_all_queries"><code>QueryClient::invalidate_all_queries</code></a>.</p>
<pre><code class="language-rust">let client = use_query_client(cx);

client.invalidate_all_queries();
</code></pre>
<p>This mimics the behavior of the <a href="https://tanstack.com/query/v4/docs/react/guides/query-invalidation"><code>invalidateQueries</code></a> method in TSQ, which matches queries by key prefix, typically the label at the start of the Key Array.</p>
<pre><code class="language-typescript">let client = useQueryClient();

// Invalidate every query in the cache.
client.invalidateQueries()
// Invalidate every query with a key that starts with `todo`
client.invalidateQueries({ queryKey: ['todo'] })
</code></pre>
<h3>Thanks for Reading</h3>
<p>Leptos Query is a powerful addition to the Leptos framework, providing a sleek way to manage asynchronous queries. By handling complexities like configurable SWR, background refetching, and query invalidation, it offers a streamlined developer experience that leans on the safety of strong typing and the flexibility of dynamic typing.</p>]]></content:encoded>
        </item>
        <item>
            <title>Mustache Sporting</title>
            <link>https://nicoburniske.com/thoughts/mustache</link>
            <description><![CDATA[I swear we're not just hipsters with mullets wearing trucker hats...]]></description>
            <category>personal</category>
            <category>humor</category>
            <category>existential</category>
            <pubDate>Sat, 10 Jun 2023 00:00:00 +0000</pubDate>
            <content:encoded><![CDATA[<p>A mustache—a sartorial time machine stitched above the lip—commands attention before words ever could.</p>
<p>As I first sported the mustache, hesitation seeped in. Can I pull it off? Do I really look my best? Donning a mustache goes beyond aesthetics. It required embracing this audacious accessory, adjusting to its weight, and confronting the shift in my reflection.</p>
<p>A mustache is your silent spokesman. The world sees it before they see you. It precedes you into rooms, forging first impressions before a single word is said. Every strand a mute syllable, composing and projecting an implicit language that is distinctively yours.</p>
<p>A mustache is a cry for revolution complying with an office chair. In the stark world of sleek corporate professionalism, where conformity is often a requisite, a mustache dares to deviate. This unassuming sliver of hair carries the weight of an unspoken oath — a conduit to subvert the status quo.</p>
<p>A mustache is a bold statement perched on the brink of tomorrow. It lives at the mercy of your razor. This transient nature of the mustache, so subject to the whims of your desire, calls for an ironclad pledge. It is a delicate dance with the fleeting nature of existence, day after day.</p>
<p>But above all, a mustache is an absurdity. A dash of comedy inked onto the face. How many of history’s stars, from Einstein to Mercury to Nietzsche, have walked an exaggerated life through the underlined humor of a mustache? Even in its seriousness, it beckons laughter. Therein lies its paradoxical charm, the seduction of the mustache.</p>
<p>It wields power to instigate change, yet never takes itself too seriously.</p>
<p><img src="/image/showcase/sinker_of_ships.jpg" alt="Mustache Sporter, and a Sinker of Ships" /></p>]]></content:encoded>
        </item>
        <item>
            <title>I need some WD-40</title>
            <link>https://nicoburniske.com/thoughts/wd-40</link>
            <description><![CDATA[From door hinges to my Twitter feed, seems like Rust is all the rage these days.]]></description>
            <category>rust</category>
            <category>leptos</category>
            <category>scala</category>
            <pubDate>Tue, 04 Jul 2023 00:00:00 +0000</pubDate>
            <content:encoded><![CDATA[<p>Around 2 weeks ago I started learning Rust. I attended an intensive 3-day <a href="https://twitter.com/jdegoes">John De Goes</a> Rust workshop, and it left me itching to learn more. With most of my experience coming in the form of backend programming on the JVM, functional programming with Scala, and webdev in Typescript, this was my first real experience with a Systems level programming language.</p>
<p>And, well, things have moved quickly. I looked at my NextJS personal website and felt compelled to &quot;Rewrite it in Rust&quot;; the result is the site you're visiting now.</p>
<p>And so I got my hands dirty with the <a href="https://github.com/leptos-rs/leptos">Leptos Web Framework</a>, which we'll get into later.</p>
<p>There's a lot to cover, so let's get into it.</p>
<p><img src="/image/blog/rustacean.jpg" alt="Rustacean evolution" /></p>
<h2>What I Like About Rust (so far...)</h2>
<ul>
<li><a href="#expression-based-thinking">Expression Based Thinking</a></li>
<li><a href="#zero-cost-abstractions">Zero Cost Abstractions</a></li>
<li><a href="#mutation-in-the-type-system">Mutation in the Type System</a></li>
<li><a href="#errors-as-values">Errors as Values</a></li>
<li><a href="#i-wanna-go-fast">I wanna go fast</a></li>
</ul>
<h3>Expression based thinking</h3>
<hr />
<h4>Understanding Expressions Vs. Statements</h4>
<p>Diving into the mechanics of Rust, one of the first things you'll notice is its strong emphasis on expressions rather than statements, a clear distinction from many other languages. But what exactly does that mean, and why should you care?</p>
<p>Expressions are best likened to LEGO blocks. Each piece, or in this case, a chunk of code, has its own unique value. Just as LEGO blocks come together to create a spaceship, castle, or whatever your imagination desires, expressions combine and interact to form more complex values.</p>
<p>Conversely, statements could be seen as the action of placing a LEGO block in a specific spot. It's a crucial action to build your model, but it doesn't constitute a structure on its own. In code, statements perform an action like assigning a value or printing a message, but they don't yield a value in themselves.</p>
<h4>Expression Orientation in Java Vs. Rust</h4>
<p>To put this into perspective, let's take a look at Java. In Java, 'if' statements are, well, just statements. Here's an example:</p>
<pre><code class="language-java">public void statements() {
    int x;
    if (true) {
        x = 3;
    } else {
        x = 10;
    }
    System.out.println(x);
}
</code></pre>
<p>In this code snippet, the 'if' statement modifies the state of a variable based on a condition but does not produce any value in and of itself. The value is then printed to the console, which also doesn't produce any value.</p>
<p>Now, let's pivot to Rust, an <a href="https://doc.rust-lang.org/reference/statements-and-expressions.html">expression-oriented</a> language. Most constructs in Rust - excluding declarations - such as blocks, ifs, matches, loops, and functions, are <strong>expressions</strong>.</p>
<p><strong>If expressions</strong> behave like ternary operators in other languages.
Meaning, they evaluate to the value of whichever branch is executed.</p>
<pre><code class="language-rust">let x: i32 = if true { 3 } else { 10 };
</code></pre>
<p><strong>Block expressions</strong> establish new scopes and evaluate to the last expression in the block. This lets you create clear demarcations without the need for helper functions:</p>
<pre><code class="language-rust">let x = 0;
let y: i32 = {
    // This variable 'x' shadows and takes precedence over the outer 'x'.
    let x = 3;
    x + 1
};
assert_eq!(y, 4);

</code></pre>
<p>If the last expression in a block ends with a semicolon, the block evaluates to the unit type <code>()</code>:</p>
<pre><code class="language-rust">let unit_block: () = {
    let number = 0;
    println!(&quot;The number is {}&quot;, number);
    number;
};
</code></pre>
<p><strong>Loop expressions</strong> yield the value they <code>break</code> with.</p>
<pre><code class="language-rust">let z: i32 = {
    let mut i = 0;
    loop {
        i += 1;
        if i == 10 {
            break 42;
        }
    }
};
</code></pre>
<p><strong>Functions</strong> evaluate to a resultant value. If the <code>return</code> keyword is omitted, the last expression in the function body is its return value.</p>
<pre><code class="language-rust">
// Implicit return.
fn add_one(x: i32) -&gt; i32 {
    x + 1
}
</code></pre>
<p>Here's a function that combines many different types of expressions to calculate the value of a wallet:</p>
<pre><code class="language-rust">enum Coin {
    Penny,
    Nickel,
    Dime,
    Quarter,
}

enum Bill {
    Washington,
    Jefferson { year: u32 },
    Lincoln,
    Hamilton,
    Jackson,
    Benjy,
}

struct Wallet {
    coins: Vec&lt;Coin&gt;,
    bills: Vec&lt;Bill&gt;,
}

// Returns the value of the wallet in cents.
fn wallet_value(wallet: &amp;Wallet) -&gt; u32 {
    let cents = {
        let mut cents = 0;
        for coin in wallet.coins.iter() {
            cents += match coin {
                Coin::Penny =&gt; 1,
                Coin::Nickel =&gt; 5,
                Coin::Dime =&gt; 10,
                Coin::Quarter =&gt; 25,
            }
        }
        cents
    };

    let dollars: u32 = wallet
        .bills
        .iter()
        .map(|bill| match bill {
            Bill::Washington =&gt; 1,
            Bill::Jefferson { year } =&gt; {
                // This is a rare bill!
                if *year &lt; 1900 {
                    100
                } else {
                    2
                }
            }
            Bill::Lincoln =&gt; 5,
            Bill::Hamilton =&gt; 10,
            Bill::Jackson =&gt; 20,
            Bill::Benjy =&gt; 100,
        })
        .sum();

    cents + dollars * 100
}


</code></pre>
<p>See how composable these expressions are? <code>wallet_value</code> combines blocks, matches, an if expression, an iterator sum, and a for loop, to calculate the value. You can &quot;follow&quot; each branch neatly to see the sequential flow of the program.</p>
<p>Rust's emphasis on expressions over statements is akin to valuing the whole LEGO model over the single step of placing a block. It's this philosophy that contributes to Rust's expressiveness and ergonomics, making it a powerful tool for developers.</p>
<h3>Zero Cost Abstractions</h3>
<hr />
<p>In Rust, most abstractions come with no runtime costs regarding execution speed or memory usage.</p>
<p>One shining example of this is the Iterator trait. Rust allows you to chain together multiple iterator methods to perform intricate transformations on data. Even with such high-level abstraction, the resultant code often matches, or even outperforms, the efficiency of manually written, low-level code.</p>
<pre><code class="language-rust">let squares_of_evens: Vec&lt;i32&gt; = {
    (1..)
        .map(|x| x * x)
        .filter(|&amp;x| x % 2 == 0)
        .take(10)
        .collect()
};
</code></pre>
<p>Even with its high-level nature, this code matches the performance of a manually written loop:</p>
<pre><code class="language-rust">let mut squares_of_evens = Vec::new();
for i in 1.. {
    let square = i * i;
    if square % 2 == 0 {
        squares_of_evens.push(square);
        if squares_of_evens.len() == 10 {
            break;
        }
    }
}
</code></pre>
<h4>&quot;Virtual-Free&quot; Rust</h4>
<p>A typical object-oriented programming concept is the &quot;virtual table&quot; (or vtable), a mechanism employed to support dynamic dispatch. It's how languages like JavaScript, Python, Java, and Scala handle method calls, deciding at runtime which specific version of a method to execute based on the object's actual type.</p>
<p>In contrast, Rust embraces a more efficient approach, sidestepping the need for vtables. It does this through <strong>static dispatch</strong>, resolving method calls at compile time, instead of waiting until runtime. This leads to faster, more memory-efficient code, as we're not paying the runtime cost of dynamically dispatching method calls.</p>
<p>Take the concept of &quot;virtual methods&quot; in Java, for example. These are methods declared in an interface and implemented by the classes that use it. For instance, if you have an interface <code>Animal</code> with a method <code>sound()</code>, and classes <code>Dog</code> and <code>Cat</code> implementing this interface, you are using virtual methods.</p>
<pre><code class="language-java">public interface Animal {
    void sound();
}

public class Dog implements Animal {
    @Override
    public void sound() {
        System.out.println(&quot;Woof!&quot;);
    }
}

public class Cat implements Animal {
    @Override
    public void sound() {
        System.out.println(&quot;Meow!&quot;);
    }
}

public class Main {
    public static void main(String[] args) {
        Animal myDog = new Dog();
        Animal myCat = new Cat();

        myDog.sound(); // Prints &quot;Woof!&quot;
        myCat.sound(); // Prints &quot;Meow!&quot;
    }
}
</code></pre>
<p>When you call sound() on an <code>Animal</code> instance, the JVM uses a vtable to determine which version of the method to execute, based on the actual type of the object.</p>
<p>A vtable is essentially an array created by the JVM in memory for each class implementing an interface. Each array entry is a pointer to a method callable by an object of the class. The JVM uses this array to look up method addresses at runtime.</p>
<p>The vtable for our <code>Animal</code> interface might look something like this:</p>
<table>
<thead>
<tr>
<th>Object</th>
<th>sound() pointer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dog</td>
<td>address of <code>Dog.sound()</code></td>
</tr>
<tr>
<td>Cat</td>
<td>address of <code>Cat.sound()</code></td>
</tr>
</tbody>
</table>
<p>So, when we create a Dog object and call <code>myDog.sound()</code>, the JVM does the following:</p>
<ol>
<li>It accesses the Dog object's vtable in memory.</li>
<li>It locates the sound() entry in the vtable and retrieves the corresponding pointer.</li>
<li>It uses this pointer to navigate to the memory address where the Dog.sound() method is stored.</li>
<li>Finally, it executes the method.</li>
</ol>
<p>This process involves dereferencing pointers and introduces a runtime cost due to the extra steps needed to decide which method to execute.</p>
<p>In contrast, Rust sidesteps this process. It accomplishes polymorphism through the use of traits and type parameters, thus avoiding the need for a vtable and the accompanying dynamic dispatch. This concept is a cornerstone of what's known as &quot;zero-cost abstractions&quot; in Rust - writing high-level, readable code without the performance penalties commonly associated with such abstractions in other languages.</p>
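<p>For illustration, here's a rough Rust counterpart to the Java example above (a sketch, not a one-to-one translation). Because <code>make_sound</code> is generic, the compiler monomorphizes it, emitting a specialized copy for <code>Dog</code> and for <code>Cat</code>, so every call is resolved at compile time:</p>

```rust
trait Animal {
    fn sound(&self) -> String;
}

struct Dog;
struct Cat;

impl Animal for Dog {
    fn sound(&self) -> String {
        "Woof!".to_string()
    }
}

impl Animal for Cat {
    fn sound(&self) -> String {
        "Meow!".to_string()
    }
}

// Generic over T: the compiler emits a specialized copy of this function
// for each concrete type it's called with, so no vtable lookup is needed.
fn make_sound<T: Animal>(animal: &T) -> String {
    animal.sound()
}

fn main() {
    println!("{}", make_sound(&Dog)); // prints "Woof!"
    println!("{}", make_sound(&Cat)); // prints "Meow!"
}
```

<p>When dynamic dispatch is genuinely needed, Rust still offers it via <code>dyn Trait</code> objects; the difference is that you opt in explicitly, rather than paying the cost everywhere.</p>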
<h3>Mutation in the Type System</h3>
<hr />
<p>Rust's type system embraces mutation as a first-class citizen.</p>
<p>For one, you have to declare variables as mutable with the <code>mut</code> keyword. This code would yield a helpful error, showing an illegal attempt to mutate a <code>Vec</code> that has not been declared as mutable.</p>
<pre><code class="language-rust">let list = vec![1, 2, 3];
list.push(4);
</code></pre>
<pre><code class="language-bash">error[E0596]: cannot borrow `list` as mutable, as it is not declared as mutable
--&gt; &lt;SOURCE POSITION&gt;
|
52 |     list.push(4);
|     ^^^^^^^^^^^^ cannot borrow as mutable
|
help: consider changing this to be mutable
|
51 |     let mut list = vec![1, 2, 3];
|         +++
</code></pre>
<p>Now, when it comes to variables, for some type <code>T</code> you have:</p>
<ul>
<li><strong>T</strong>:
<em>You are the owner of the data.</em></li>
<li><strong>&amp;mut T</strong>:
<em>You have EXCLUSIVE write access to the data.</em></li>
<li><strong>&amp;T</strong>:
<em>You have SHARED read access to the data.</em></li>
</ul>
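<p>Here's a small sketch of these rules in action. Shared borrows can coexist freely, but a mutable borrow must be exclusive, and the compiler enforces this at compile time:</p>

```rust
fn main() {
    let mut data = vec![1, 2, 3];

    // Any number of shared (&T) borrows may coexist.
    let a = &data;
    let b = &data;
    println!("{} {}", a.len(), b.len());

    // A mutable (&mut T) borrow is exclusive. This compiles only because
    // `a` and `b` are never used again after this point.
    let m = &mut data;
    m.push(4);
    println!("{:?}", m);

    // Uncommenting the line below would be a compile error (E0502),
    // because it would use a shared borrow that overlaps with `m`:
    // println!("{}", a.len());
}
```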
<p>Consider a simple function in Rust that increments a count:</p>
<pre><code class="language-rust">fn increment_count(count: &amp;mut i32) {
    let value = *count;
    *count = value + 1;
}

#[test]
fn test_inc_count() {
    let mut count = 0;
    let count_ref = &amp;mut count;
    increment_count(count_ref);
    assert_eq!(count, 1);
}
</code></pre>
<p>In this example, the <code>increment_count</code> function takes a mutable reference to an integer (<code>&amp;mut i32</code>). The mutable reference denotes that <code>count</code> can be modified within the function.</p>
<p>Rust forces you to indicate that a reference is mutable when you declare it. This applies to <code>struct</code>s, <code>enum</code>s, and function parameters. This explicitness provides a lot of safety, and forced documentation.</p>
<p>Fine control over mutability is something that Rust shares with functional languages like Scala. We can draw parallels to a similar function in Scala using a <a href="https://zio.dev/reference/concurrency/ref/">ZIO Ref</a>, a concurrent mutable reference:</p>
<pre><code class="language-scala">object TestSpec extends ZIOSpecDefault:
    def incrementCount(countRef: Ref[Int]): UIO[Unit] =
        for
            value &lt;- countRef.get
            _     &lt;- countRef.set(value + 1)
        yield ()

    def spec = suite(&quot;incrementCount&quot;) {
        test(&quot;incrementCount should increment the value of the ref by 1&quot;) {
            for
                ref   &lt;- Ref.make(0)
                _     &lt;- incrementCount(ref)
                value &lt;- ref.get
            yield assertTrue(value == 1)
        }
    }
</code></pre>
<p>In the Scala example, <code>incrementCount</code> is an atomic operation using a <a href="https://zio.dev/reference/concurrency/ref/">ZIO Ref</a>. A Ref allows for safe mutation in a concurrent context. Similarly to the <code>mut</code> keyword in Rust, when you see the <code>Ref</code> type, it indicates that mutation is <strong>bound</strong> to take place in the given scope.</p>
<p>Though Rust and Scala have different idioms and philosophies, both provide robust ways to control mutation and ensure safety.</p>
<p>A <code>Ref</code>'s concurrent counterpart in Rust would look something like:</p>
<pre><code class="language-rust">type Ref&lt;T&gt; = Arc&lt;RwLock&lt;T&gt;&gt;;
</code></pre>
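<p>To make that concrete, here's a minimal sketch mirroring <code>incrementCount</code> with the standard library's <code>Arc</code> and <code>RwLock</code>:</p>

```rust
use std::sync::{Arc, RwLock};
use std::thread;

type Ref<T> = Arc<RwLock<T>>;

fn increment_count(count: &Ref<i32>) {
    // `write()` grants exclusive access; the lock is released
    // when the guard goes out of scope.
    let mut guard = count.write().unwrap();
    *guard += 1;
}

fn main() {
    let count: Ref<i32> = Arc::new(RwLock::new(0));

    // `Arc` lets each thread share ownership of the same `RwLock`.
    let handles: Vec<_> = (0..10)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || increment_count(&count))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*count.read().unwrap(), 10);
}
```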
<h3>Errors as Values</h3>
<hr />
<p>When it comes to handling errors, Rust takes an intriguing approach: it treats errors as values, rather than as control flow primitives. This paradigm is inspired by functional programming languages and leverages Rust's robust algebraic data types (ADTs) to create a powerful error handling system.</p>
<p>In many other programming languages such as Java, Python, and C++, errors are usually handled with exceptions. An exception is a special kind of object created and thrown when an error occurs, effectively interrupting the normal flow of a program. Control is then transferred to the nearest exception handler in the call stack, which is designed to address the specific error.</p>
<p>Rust, on the other hand, opts for a different approach, treating errors as ordinary data that can be returned by functions and passed around. This approach is centered around the use of ADTs, specifically enums, to model potential error states.</p>
<p>There are two primary types used for error handling in Rust:</p>
<ul>
<li><code>Option&lt;T&gt;</code>
<ul>
<li>A value that is either present (<code>Some</code>) or missing (<code>None</code>)</li>
<li>Akin to a type-safe null</li>
</ul>
</li>
</ul>
<pre><code class="language-rust">enum Option&lt;T&gt; {
    Some(T),
    None,
}

// Find the index of the word in the Vec.
fn find_word(words: Vec&lt;&amp;str&gt;, target: &amp;str) -&gt; Option&lt;usize&gt; {
    words.iter().enumerate().find_map(
        |(index, &amp;word)| {
            if word == target {
                Some(index)
            } else {
                None
            }
        },
    )
}

#[test]
fn find_word_find_cherry() {
    let words = vec![&quot;apple&quot;, &quot;banana&quot;, &quot;cherry&quot;, &quot;date&quot;];

    let result = find_word(words, &quot;cherry&quot;);

    assert_eq!(result, Some(2));
}

</code></pre>
<ul>
<li><code>Result&lt;T, E&gt;</code>:
<ul>
<li>The result of a computation that may fail.</li>
<li>It can either be <code>Ok(T)</code> if the computation was successful, or <code>Err(E)</code> if it failed. <code>E</code> will contain information about what went wrong.</li>
</ul>
</li>
</ul>
<pre><code class="language-rust">enum Result&lt;E, T&gt; {
    Ok(T)
    Err(E)
}

fn divide(numerator: f64, denominator: f64) -&gt; Result&lt;f64, String&gt; {
    if denominator == 0.0 {
        Err(&quot;Cannot divide by zero&quot;.to_string())
    } else {
        Ok(numerator / denominator)
    }
}

#[test]
fn cannot_divide_by_zero() {
    let result = divide(5.0, 0.0);

    assert_eq!(result, Err(&quot;Cannot divide by zero&quot;.to_string()));
}

</code></pre>
<p>A key point of this design choice is to make error handling more explicit and deliberate. Unlike exception-based systems, where it can be easy to overlook handling for an error, the Rust compiler enforces the handling of <code>Result</code> and <code>Option</code> types. Note how their implementation is completely transparent, and requires no custom compiler machinery to implement, thanks to <a href="#zero-cost-abstractions">zero-cost abstractions</a>. This enables programmers to model other expressive error types, such as Inclusive-Or errors, with the same efficiency as the error types in the standard library.</p>
<p>By using ADTs and pattern matching, Rust effectively turns error handling from a control flow problem into a data modeling problem.</p>
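<p>For example, an application can model its failure modes as a plain <code>enum</code> (the names below are illustrative) and let match exhaustiveness guarantee that every case is handled:</p>

```rust
// Illustrative error type - the variants are whatever your domain needs.
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber(String),
}

fn parse_quantity(input: &str) -> Result<u32, ParseError> {
    let trimmed = input.trim();
    if trimmed.is_empty() {
        return Err(ParseError::Empty);
    }
    trimmed
        .parse::<u32>()
        .map_err(|_| ParseError::NotANumber(input.to_string()))
}

fn main() {
    // The match must cover every variant, or the program won't compile.
    match parse_quantity("42") {
        Ok(n) => println!("parsed {}", n),
        Err(ParseError::Empty) => println!("input was empty"),
        Err(ParseError::NotANumber(s)) => println!("{:?} is not a number", s),
    }
}
```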
<h4>Performance of Error Handling</h4>
<p>When we consider the performance implications of this design, Rust's <strong>Errors as Values</strong> approach also provides a noticeable advantage. Exception handling in traditional languages incurs a non-trivial runtime cost. When an exception is thrown, the runtime environment needs to unwind the stack until it finds a suitable exception handler.</p>
<p>In contrast, Rust's approach to handling errors incurs a minimal performance penalty. This is because there's no need for stack unwinding or searching for exception handlers. Errors are returned just like any other value, and error handling is done via pattern matching, a compile-time mechanism. The &quot;happy path&quot; and the &quot;error path&quot; are treated uniformly in terms of performance.</p>
<ul>
<li>
<p>Happy path: optimal, error-free, successful execution of a program.</p>
</li>
<li>
<p>Error path: execution of a program that encounters an error/exception.</p>
</li>
</ul>
<p>The only cost is that when a <code>Result</code> is evaluated, the code performs an additional check on the discriminant of the <code>Result</code> to determine whether it is <code>Ok</code> or <code>Err</code> before continuing.</p>
<p>A caveat is that if exceptions are used properly - only for truly <em>exceptional</em> circumstances - then the happy path will not incur any performance penalty. However, this is rarely the case in practice.</p>
<p>So if you have a program that uses exceptions, and the exceptions rarely occur, then exceptions are being put to good use. However, if exceptions are being thrown frequently, then your program could suffer an enormous performance penalty, which Rust opts to avoid.</p>
<p><em>TLDR: exceptions may allow the happy path to be faster, but at the expense of performance on the error path. There's no free lunch.</em></p>
<h4>Hidden control flow</h4>
<p>An understated advantage of <strong>Errors as Values</strong> over exceptions is predictability and the absence of &quot;hidden&quot; control flow paths. Exceptions can be thrown at many points in a program (and in some languages <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/throw#description">any value can be thrown</a>), and it can be non-obvious where they should be handled. On the other hand, functions in Rust that can fail have their error types explicitly defined in their signatures, making it clear what errors need to be handled.</p>
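<p>Explicitness doesn't have to mean verbosity, either. The <code>?</code> operator propagates an <code>Err</code> to the caller at a visible point in the code, rather than through an invisible unwinding path. A small sketch:</p>

```rust
use std::num::ParseIntError;

fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    // Each `?` either unwraps the Ok value or returns the Err to the caller.
    let x = a.parse::<i32>()?;
    let y = b.parse::<i32>()?;
    Ok((x, y))
}

fn main() {
    assert_eq!(parse_pair("1", "2"), Ok((1, 2)));
    assert!(parse_pair("1", "oops").is_err());
    println!("both calls behaved as expected");
}
```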
<h3>I wanna go fast</h3>
<hr />
<p><a href="https://www.youtube.com/watch?v=_qJGsSuFRIg">anyone else?</a></p>
<p>So we've all heard that Rust is fast. Here are some of the reasons why.</p>
<h4>Zero-Cost Abstractions</h4>
<p>The cornerstone of Rust's performance lies in its mantra of &quot;zero-cost abstractions.&quot; The concept is simple: abstractions, which allow us to write clear and concise code, should not come at the cost of runtime performance.</p>
<h4>Ownership and Borrowing: The Ultimate Garbage Collector</h4>
<p>Garbage collection (GC) is a double-edged sword. On the one hand, it frees developers from manual memory management, preventing a whole class of bugs. On the other hand, GC comes with an overhead, and can introduce unpredictable pauses in a running program.</p>
<p>Rust's unique system of ownership and borrowing eliminates the need for a garbage collector altogether, while still providing the safety guarantees that a GC would. With Rust, memory is managed through a system of ownership with a set of rules that the compiler checks at compile-time. No garbage collector is needed.</p>
<p>By default, objects are stack-allocated, which generally is more efficient than heap allocation. However, Rust also allows explicit heap allocation using constructs like Box. This fine-grained control over memory management allows Rust programs to be incredibly efficient, minimizing runtime overhead and maximizing speed.</p>
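<p>Here's a brief sketch of both points - explicit heap allocation via <code>Box</code>, and deterministic, GC-free cleanup - using a type with a <code>Drop</code> implementation:</p>

```rust
struct Resource {
    name: &'static str,
}

// Drop runs at a statically known point - the end of the owning scope -
// not whenever a garbage collector gets around to it.
impl Drop for Resource {
    fn drop(&mut self) {
        println!("dropping {}", self.name);
    }
}

fn main() {
    let _on_stack = Resource { name: "stack" };

    {
        // Box::new moves the value onto the heap; the Box itself lives on
        // the stack and owns (and will free) the heap allocation.
        let _on_heap = Box::new(Resource { name: "heap" });
    } // "dropping heap" prints here, deterministically.

    println!("end of main");
} // "dropping stack" prints here.
```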
<h4>Lightweight Concurrency with Async/Await and Tokio</h4>
<p>Multithreading and concurrency are critical for modern applications, but managing threads safely is notoriously difficult. Rust's async/await syntax and the Tokio runtime bring the benefits of asynchronous programming to Rust without the usual pitfalls.</p>
<p>Async/await in Rust is a zero-cost abstraction. Unlike in other languages, where async/await can add significant overhead, in Rust the generated code is as efficient as hand-written state machines.</p>
<p>The Tokio runtime further enhances Rust's concurrency story. It's a non-blocking I/O platform for writing asynchronous applications, with a focus on simplicity, speed, and reliability.
Tokio makes use of core threads and blocking threads.</p>
<p>As per the <a href="https://docs.rs/tokio/1.29.1/tokio/index.html#cpu-bound-tasks-and-blocking-code">docs</a>:</p>
<blockquote>
<p>Tokio provides two kinds of threads: Core threads and blocking threads</p>
</blockquote>
<blockquote>
<p>The core threads are where all asynchronous code runs, and Tokio will by default spawn one for each CPU core</p>
</blockquote>
<blockquote>
<p>The blocking threads are spawned on demand, can be used to run blocking code that would otherwise block other tasks from running</p>
</blockquote>
<p>Together, async/await and Tokio make it possible to write high-performance concurrent code that is still safe and easy to understand.</p>
<h4>Power of Optimized Builds</h4>
<p>In Rust, you typically develop in &quot;debug&quot; mode, where the compiler prioritizes compilation speed and debug information. However, when you're ready to release your application, you switch to &quot;release&quot; mode, where the compiler takes more time to apply optimizations that make your code run faster.</p>
<p>The difference between the two can be astonishing. Optimized <a href="https://doc.rust-lang.org/book/ch14-01-release-profiles.html"><code>--release</code></a> builds are often 10 to 100 times faster than debug builds.</p>
<h2>Oxidation underway</h2>
<hr />
<p>Like a metal refined by the elements, my journey into Rust has ignited a transformative process. Exploring Rust's expression-based thinking, zero-cost abstractions, and the treatment of mutation in its type system has deepened my understanding of how computers and programs truly operate. Rust has been designed in the face of past grievances (e.g. C, C++, Java) and future challenges.</p>
<p>I'm sure I'll write more on Rust's downsides, and I want to share my experience building this website with the Leptos Web Framework. See ya then.</p>]]></content:encoded>
        </item>
    </channel>
</rss>