<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Rezha Julio</title><description>Abusing computers for Fun and Profit</description><link>https://rezhajul.io/</link><language>en</language><atom:link href="https://rezhajul.io/rss.xml" rel="self" type="application/rss+xml" xmlns:atom="http://www.w3.org/2005/Atom"/><item><title>Nobody Gets Promoted for Simplicity</title><link>https://rezhajul.io/posts/nobody-gets-promoted-for-simplicity/</link><guid isPermaLink="true">https://rezhajul.io/posts/nobody-gets-promoted-for-simplicity/</guid><description>Promotion systems in tech reward complexity, not good judgment. Here&apos;s why that keeps happening and what you can do about it.</description><pubDate>Sun, 08 Mar 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Edsger Dijkstra once wrote: &lt;em&gt;&amp;quot;Simplicity is a great virtue, but it requires hard work to achieve and education to appreciate. And to make matters worse, complexity sells better.&amp;quot;&lt;/em&gt; That was in 1984. Forty-two years later, we still haven&amp;#39;t figured it out.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve been thinking about this a lot because I keep seeing the same pattern at companies I&amp;#39;ve worked at and in conversations with friends at other shops. The engineer who ships a boring solution gets a pat on the back. The engineer who builds something elaborate gets a promotion. Everyone kind of knows this is wrong, but nobody changes it.&lt;/p&gt;
&lt;h2&gt;How promotion packets actually work&lt;/h2&gt;
&lt;p&gt;Promotion committees at most tech companies evaluate &amp;quot;impact,&amp;quot; and impact is measured by what you built, how big it was, and how many people it affected. Google&amp;#39;s promotion process, which half the industry has copied in some form, explicitly looks for &amp;quot;increasing scope&amp;quot; as you move up levels. The &lt;a href=&quot;https://staffeng.com/guides/staff-archetypes/&quot;&gt;Staff Engineer archetypes&lt;/a&gt; Will Larson describes all involve operating at larger and larger scales.&lt;/p&gt;
&lt;p&gt;None of that is inherently bad. But it creates a gravitational pull toward building more, not less. A straightforward implementation that ships in two days and runs without incident for six months is genuinely hard to write a compelling promo case around. Meanwhile, the engineer who introduced an event-driven architecture with a custom configuration framework has a story that practically writes itself, even if the team didn&amp;#39;t need any of it.&lt;/p&gt;
&lt;p&gt;Kent Beck put it well: &lt;em&gt;&amp;quot;I&amp;#39;m not a great programmer; I&amp;#39;m just a good programmer with great habits.&amp;quot;&lt;/em&gt; One of those habits is resisting the urge to build the impressive thing when the boring thing works fine.&lt;/p&gt;
&lt;h2&gt;We train people to do this&lt;/h2&gt;
&lt;p&gt;The bias starts before anyone gets hired. In system design interviews, you propose a single Postgres database behind a REST API and the interviewer pushes: &amp;quot;What about 10 million users?&amp;quot; So you add Redis, then Kafka, then a sharded database with read replicas, and by the end you&amp;#39;ve drawn an architecture diagram for a system that 99% of companies will never need. The interviewer nods approvingly. Lesson learned: simple wasn&amp;#39;t enough.&lt;/p&gt;
&lt;p&gt;To be fair, interviewers sometimes have good reasons to push on scale. They want to know if you understand distributed systems and can reason about tradeoffs under pressure. But the meta-lesson candidates absorb is that complexity impresses people. And that sticks.&lt;/p&gt;
&lt;p&gt;Design reviews have the same problem. Someone proposes a clean approach and gets hit with &amp;quot;shouldn&amp;#39;t we future-proof this?&amp;quot; So they go back and add layers for problems that might never show up. I&amp;#39;ve done this myself. I once built an elaborate plugin system for a feature that ended up having exactly one plugin, ever. The abstraction cost more time to maintain than it would have taken to just copy-paste the 30 lines of code if we ever needed a second one.&lt;/p&gt;
&lt;p&gt;Martin Fowler&amp;#39;s &lt;a href=&quot;https://martinfowler.com/bliki/Yagni.html&quot;&gt;YAGNI principle&lt;/a&gt; (&amp;quot;You Aren&amp;#39;t Gonna Need It&amp;quot;) gets nodded at in every architecture discussion and ignored in most of them. The social pressure to look like you&amp;#39;ve thought of everything is stronger than the engineering principle telling you to wait until you actually need it.&lt;/p&gt;
&lt;h2&gt;Complexity has real costs people ignore&lt;/h2&gt;
&lt;p&gt;There&amp;#39;s a 2021 paper by researchers at GitHub and Microsoft, &lt;a href=&quot;https://queue.acm.org/detail.cfm?id=3454124&quot;&gt;&amp;quot;The SPACE of Developer Productivity&amp;quot;&lt;/a&gt;, which argues that productivity can&amp;#39;t be reduced to a single metric and is tightly coupled to developer satisfaction and cognitive load. This lines up with what most of us already feel: working in an over-engineered codebase is exhausting. Every change requires understanding three layers of abstraction before you can touch anything.&lt;/p&gt;
&lt;p&gt;Sandi Metz talks about this in terms of &lt;a href=&quot;https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction&quot;&gt;the wrong abstraction&lt;/a&gt;: &lt;em&gt;&amp;quot;Prefer duplication over the wrong abstraction.&amp;quot;&lt;/em&gt; I&amp;#39;ve seen engineers create elaborate class hierarchies to avoid repeating 10 lines of code, then spend weeks debugging the hierarchy when requirements changed slightly. The duplication would have been fine. More than fine, actually. It would have been obvious and easy to change.&lt;/p&gt;
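&lt;p&gt;A hypothetical miniature of the pattern (my example, not one of Metz&amp;#39;s): a &amp;quot;shared&amp;quot; formatter that sprouts flags as requirements diverge, next to the duplicated version that would have stayed obvious and easy to change.&lt;/p&gt;

```python
# The "wrong abstraction" in miniature: one shared function accumulating
# flags as two call sites slowly diverge. Every new requirement adds a
# conditional that every reader now has to trace.
def format_report(data, kind, include_totals=False, csv_quirk=False):
    rows = []
    for item in data:
        if kind == "sales" and csv_quirk:
            rows.append(f'"{item}"')
        else:
            rows.append(str(item))
    if include_totals and kind == "sales":
        rows.append(f"total: {len(data)}")
    return "\n".join(rows)

# The "duplication" alternative: two plain functions, each trivial to
# read and safe to change without thinking about the other.
def format_sales_report(data):
    rows = [f'"{item}"' for item in data] + [f"total: {len(data)}"]
    return "\n".join(rows)

def format_inventory_report(data):
    return "\n".join(str(item) for item in data)
```

&lt;p&gt;The shared version is shorter today and more expensive every day after, because each change has to preserve behavior for both callers at once.&lt;/p&gt;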
&lt;p&gt;Google&amp;#39;s own internal studies on code health found that the highest-rated codebases weren&amp;#39;t the most cleverly architected. They were the ones where a new team member could make a change within their first week. Simplicity isn&amp;#39;t just an aesthetic preference, it directly affects how fast a team can move.&lt;/p&gt;
&lt;h2&gt;The actual path to seniority&lt;/h2&gt;
&lt;p&gt;The confidence to not build something is weirdly rare. Anyone can add complexity; it&amp;#39;s the default mode of engineering. Leaving it out means you&amp;#39;ve actually thought through the problem and made a deliberate choice. That&amp;#39;s harder than it looks.&lt;/p&gt;
&lt;p&gt;The best senior engineers I&amp;#39;ve worked with don&amp;#39;t know more tools and patterns than everyone else. They know when not to use them. Their code makes you think &amp;quot;well, yeah, of course&amp;quot; because there&amp;#39;s nothing clever about it, nothing that makes you feel stupid. That clarity comes from experience, not from a lack of skill.&lt;/p&gt;
&lt;p&gt;Rich Hickey&amp;#39;s talk &lt;a href=&quot;https://www.infoq.com/presentations/Simple-Made-Easy/&quot;&gt;&amp;quot;Simple Made Easy&amp;quot;&lt;/a&gt; draws a distinction worth remembering: simple means &amp;quot;not compound,&amp;quot; having one role or concept. Easy means &amp;quot;near at hand,&amp;quot; familiar. They&amp;#39;re not the same thing. A microservice architecture might be easy to reach for if that&amp;#39;s what you&amp;#39;ve always done, but it&amp;#39;s not simple for a problem that a monolith handles fine.&lt;/p&gt;
&lt;h2&gt;Making simplicity visible&lt;/h2&gt;
&lt;p&gt;Your work won&amp;#39;t speak for itself. Not because it&amp;#39;s not good, but because promotion systems aren&amp;#39;t built to hear it.&lt;/p&gt;
&lt;p&gt;&amp;quot;Shipped feature X&amp;quot; doesn&amp;#39;t capture anything. But &amp;quot;evaluated three approaches including an event-driven model, determined the straightforward implementation met all current requirements, shipped in two days with zero incidents over six months&amp;quot; tells the real story. The decision not to build something is an architectural decision. Write it up like one.&lt;/p&gt;
&lt;p&gt;When someone asks &amp;quot;shouldn&amp;#39;t we future-proof this?&amp;quot; in a review, don&amp;#39;t just cave. Try: &amp;quot;Here&amp;#39;s what it would cost to add that later, and here&amp;#39;s what it costs us to add it now. I think we wait.&amp;quot; You&amp;#39;re showing your work, not pushing back.&lt;/p&gt;
&lt;p&gt;On the leadership side, the single best thing a tech lead can do is change the default question. Instead of &amp;quot;have we thought about scale?&amp;quot;, ask &amp;quot;what&amp;#39;s the simplest version we can ship, and what signals would tell us we need more?&amp;quot; That one swap puts the burden of proof on complexity instead of simplicity. And pay attention to what gets celebrated publicly. If every shout-out goes to the big complex project, that&amp;#39;s what people will optimize for. Start recognizing the engineer who deleted code, or the one who said &amp;quot;we don&amp;#39;t need this yet&amp;quot; and turned out to be right.&lt;/p&gt;
&lt;h2&gt;The uncomfortable part&lt;/h2&gt;
&lt;p&gt;Some teams genuinely value this. Others say they do but promote the opposite. If you do everything right and your organization still only rewards elaborate systems, that&amp;#39;s useful information. It tells you where you work. You can either play the game or find a place where good judgment is actually recognized.&lt;/p&gt;
&lt;p&gt;But at least you&amp;#39;ll know which one you&amp;#39;re in.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This post was inspired by &lt;a href=&quot;https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/&quot;&gt;a piece on terriblesoftware.org&lt;/a&gt; on the same topic, which I recommend reading.&lt;/em&gt;&lt;/p&gt;
</content:encoded></item><item><title>The AI Paradox: Coding is Easier, Engineering is Harder</title><link>https://rezhajul.io/posts/ai-paradox-coding-easier-engineering-harder/</link><guid isPermaLink="true">https://rezhajul.io/posts/ai-paradox-coding-easier-engineering-harder/</guid><description>Writing code got easy. Knowing what to build didn&apos;t.</description><pubDate>Sat, 07 Mar 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Writing code has never been this easy. AI assistants will generate boilerplate, wire up API calls, scaffold tests. You describe what you want and it appears. I use these tools every day and they&amp;#39;re genuinely useful. But I keep noticing something: the &lt;em&gt;typing&lt;/em&gt; part of my job shrank, and everything around it got bigger.&lt;/p&gt;
&lt;p&gt;I wrote about &lt;a href=&quot;/posts/compound-engineering/&quot;&gt;how the compound engineering loop works&lt;/a&gt; recently, and about &lt;a href=&quot;/posts/the-skeptic-who-got-benchmaxxed/&quot;&gt;what happened when Max Woolf stress-tested agents on real Rust projects&lt;/a&gt;. This post is about a simpler observation: the work didn&amp;#39;t get easier. It moved.&lt;/p&gt;
&lt;h3&gt;I don&amp;#39;t write code anymore, I review it&lt;/h3&gt;
&lt;p&gt;Before AI, I spent most of my time turning logic into syntax. Line by line, I understood the code because I was the one who wrote it.&lt;/p&gt;
&lt;p&gt;Now the AI writes it. My job is to read what it produced and decide if it&amp;#39;s correct. That sounds easier. It isn&amp;#39;t. When I wrote code myself, I had the full context in my head. When I&amp;#39;m reviewing something the AI generated, I have to rebuild that context from scratch. I&amp;#39;m watching for hallucinated dependencies, wrong assumptions, bugs that &lt;em&gt;look&lt;/em&gt; fine on first read. I &lt;a href=&quot;/posts/reality-ai-pair-programming-management/&quot;&gt;called this &amp;quot;management&amp;quot; before&lt;/a&gt;, and the label still fits. You spend your time auditing someone else&amp;#39;s output instead of producing your own.&lt;/p&gt;
&lt;p&gt;I spend more time staring at diffs than I ever spent typing.&lt;/p&gt;
&lt;h3&gt;The ghost bugs&lt;/h3&gt;
&lt;p&gt;AI doesn&amp;#39;t hedge or leave TODO comments; it presents everything with the same confidence. It will hand you a function that references a library that doesn&amp;#39;t exist, or call an API endpoint that was deprecated two years ago. The code compiles, the types check, and it still does the wrong thing.&lt;/p&gt;
&lt;p&gt;Human bugs tend to follow patterns I recognize. AI bugs are weirder. I once spent an hour tracking down a failure that turned out to be a hallucinated enum variant. The AI generated it, used it consistently across three files, and it wasn&amp;#39;t real. That kind of thing messes with your trust. If you&amp;#39;ve ever &lt;a href=&quot;/posts/microgpt-demystifying-llms/&quot;&gt;dug into how LLMs actually work&lt;/a&gt;, you know why: the model is completing the most statistically likely sequence, not reasoning about whether the symbol exists.&lt;/p&gt;
&lt;h3&gt;Speed without architecture is just fast debt&lt;/h3&gt;
&lt;p&gt;Generating code is cheap now. That&amp;#39;s the problem. It&amp;#39;s tempting to let the AI build out a feature quickly, ship it, move on. But &amp;quot;move fast&amp;quot; without architecture just means you pile on technical debt faster than before.&lt;/p&gt;
&lt;p&gt;Nobody needs me to write a &lt;code&gt;for&lt;/code&gt; loop in 2026. What they need is someone who can look at five AI-generated modules and figure out whether they&amp;#39;ll still work together six months from now. Woolf &lt;a href=&quot;/posts/the-skeptic-who-got-benchmaxxed/&quot;&gt;figured this out too&lt;/a&gt;. His results came from writing detailed specs and &lt;code&gt;AGENTS.md&lt;/code&gt; files &lt;em&gt;before&lt;/em&gt; letting agents touch the code. The architecture work happened up front, not after.&lt;/p&gt;
&lt;h3&gt;Sometimes I just want to type&lt;/h3&gt;
&lt;p&gt;A few developers I follow have been talking about turning off completions for side projects. No Copilot, no agents, just a text editor and your brain.&lt;/p&gt;
&lt;p&gt;I tried it last weekend. Wrote a small CLI tool by hand, maybe 200 lines. It took me four times as long as it would have with AI, and I enjoyed every minute of it. The slowness forces you to actually think about each decision instead of rubber-stamping whatever the model suggests.&lt;/p&gt;
&lt;p&gt;Not practical for work. But for the stuff I build for myself, I might keep doing it.&lt;/p&gt;
&lt;h3&gt;The trade&lt;/h3&gt;
&lt;p&gt;AI made me faster at producing code and slower at everything else. I review more and debug stranger problems. The writing is easy now. The engineering never was.&lt;/p&gt;
</content:encoded></item><item><title>Demystifying AI: Learning LLMs Through 200 Lines of MicroGPT</title><link>https://rezhajul.io/posts/microgpt-demystifying-llms/</link><guid isPermaLink="true">https://rezhajul.io/posts/microgpt-demystifying-llms/</guid><description>Large Language Models seem like magic, but they are just math. Let&apos;s break down Andrej Karpathy&apos;s MicroGPT to understand the core mechanics.</description><pubDate>Fri, 06 Mar 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Large Language Models (LLMs) are often treated as black boxes. We feed them prompts, and they spit out code, essays, or architectural advice. But underneath the billions of parameters and GPU clusters, what is actually happening?&lt;/p&gt;
&lt;p&gt;On February 12, 2026, Andrej Karpathy published &lt;a href=&quot;https://gist.github.com/karpathy/8627fe009c40f57531cb18360106ce95&quot;&gt;MicroGPT&lt;/a&gt; -- a single Python file, about 200 lines, zero dependencies, that trains and runs a GPT from scratch. No PyTorch. No TensorFlow. The only imports are &lt;code&gt;os&lt;/code&gt;, &lt;code&gt;math&lt;/code&gt;, and &lt;code&gt;random&lt;/code&gt;. He called it an &amp;quot;art project,&amp;quot; and I think that&amp;#39;s underselling it.&lt;/p&gt;
&lt;p&gt;MicroGPT is the end point of a long series of Karpathy&amp;#39;s educational projects: &lt;a href=&quot;https://github.com/karpathy/micrograd&quot;&gt;micrograd&lt;/a&gt; (autograd engine), &lt;a href=&quot;https://github.com/karpathy/makemore&quot;&gt;makemore&lt;/a&gt; (character-level generation), &lt;a href=&quot;https://github.com/karpathy/nanoGPT&quot;&gt;nanoGPT&lt;/a&gt; (practical GPT training), and finally this -- the whole thing distilled into one file. As he put it: &amp;quot;This file is the complete algorithm. Everything else is just efficiency.&amp;quot;&lt;/p&gt;
&lt;h2&gt;What it actually does&lt;/h2&gt;
&lt;p&gt;MicroGPT trains on a dataset of about 32,000 baby names. It learns the statistical patterns of how names are spelled, then generates new fake names that sound real but never existed. Names like &amp;quot;karia&amp;quot; or &amp;quot;alend.&amp;quot; It&amp;#39;s a character-level language model, so each letter is a token.&lt;/p&gt;
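&lt;p&gt;A character-level tokenizer fits in a few lines. This sketch is mine, not the exact scheme in Karpathy&amp;#39;s file (in particular, which ID marks the end of a name is an assumption here):&lt;/p&gt;

```python
# Hypothetical sketch of a character-level tokenizer: every unique
# character in the dataset gets an integer ID, with 0 reserved as an
# end-of-name token.
docs = ["emma", "olivia", "ava"]
chars = sorted(set("".join(docs)))                 # unique characters
stoi = {ch: i + 1 for i, ch in enumerate(chars)}   # char -> integer ID
itos = {i: ch for ch, i in stoi.items()}           # integer ID -> char

def encode(name):
    return [stoi[ch] for ch in name] + [0]         # terminate with 0

def decode(ids):
    return "".join(itos[i] for i in ids if i != 0)

print(encode("ava"))           # a short list of integer IDs ending in 0
print(decode(encode("ava")))   # round-trips back to "ava"
```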
&lt;p&gt;That might sound trivial compared to ChatGPT, but the mechanism is identical. ChatGPT is this same core loop (predict next token, sample, repeat) scaled up with more data, more parameters, and post-training to make it conversational. When you chat with it, the system prompt, your message, and its reply are all just tokens in a sequence. The model is completing the document one token at a time, same as MicroGPT completing a name.&lt;/p&gt;
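&lt;p&gt;That core loop can be sketched in a few lines. Everything here except the loop structure is a placeholder; &lt;code&gt;model_probs&lt;/code&gt; stands in for a trained model&amp;#39;s next-token distribution:&lt;/p&gt;

```python
import random

def model_probs(context):
    # Placeholder: a real model returns P(next char | context).
    # Here every character is equally likely, with '.' as end-of-name.
    vocab = "abcdefghijklmnopqrstuvwxyz."
    return {ch: 1.0 / len(vocab) for ch in vocab}

def generate(max_len=10):
    out = []
    while len(out) < max_len:
        probs = model_probs(out)
        # Sample the next token from the distribution...
        ch = random.choices(list(probs), weights=list(probs.values()))[0]
        if ch == ".":
            break          # ...and stop when the end token is drawn.
        out.append(ch)
    return "".join(out)

print(generate())
```

&lt;p&gt;Swap the placeholder for a real model and you have the entire inference side of ChatGPT, MicroGPT, and everything in between.&lt;/p&gt;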
&lt;h2&gt;Why it is worth reading&lt;/h2&gt;
&lt;p&gt;Production models like Llama 3 or GPT-4 bury their core logic under layers of optimizations, distributed training, and hardware-specific tweaks. Good luck figuring out how attention actually works by reading that code.&lt;/p&gt;
&lt;p&gt;MicroGPT strips all of that away. The model has exactly 4,192 parameters. GPT-2 had 1.5 billion. Modern LLMs have hundreds of billions. But the architecture is the same shape: embeddings, attention, feed-forward networks, residual connections. Just much, much smaller.&lt;/p&gt;
&lt;p&gt;What makes MicroGPT unusual is that it does not just implement the model. It implements everything from scratch. The entire file contains:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A character-level &lt;strong&gt;tokenizer&lt;/strong&gt; (each unique character gets an integer ID)&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;autograd engine&lt;/strong&gt; (the &lt;code&gt;Value&lt;/code&gt; class, which tracks gradients through the computation graph)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;GPT-2-like neural network&lt;/strong&gt; (with RMSNorm instead of LayerNorm, ReLU instead of GeLU, no biases)&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Adam optimizer&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;training loop&lt;/strong&gt; and &lt;strong&gt;inference loop&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The autograd part is the thing that surprised me. In normal deep learning, you use PyTorch or JAX to compute gradients automatically. Here, Karpathy reimplements backpropagation from scratch in about 30 lines using a &lt;code&gt;Value&lt;/code&gt; class that wraps scalars and tracks their dependencies. Every addition, multiplication, and activation function creates a node in a computation graph. When you call &lt;code&gt;loss.backward()&lt;/code&gt;, it walks the graph in reverse and computes gradients using the chain rule. The same thing PyTorch does, just one scalar at a time instead of batched tensors on a GPU.&lt;/p&gt;
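&lt;p&gt;To make that concrete, here is a stripped-down scalar autograd engine in the same spirit. This is my sketch of the idea, not Karpathy&amp;#39;s code, and it covers only addition and multiplication:&lt;/p&gt;

```python
# Minimal scalar autograd: each Value remembers its children and a
# closure that pushes its gradient to them via the chain rule.
class Value:
    def __init__(self, data, children=(), grad_fn=None):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._grad_fn = grad_fn

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def grad_fn():
            self.grad += out.grad    # d(a+b)/da = 1
            other.grad += out.grad   # d(a+b)/db = 1
        out._grad_fn = grad_fn
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def grad_fn():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._grad_fn = grad_fn
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            if v._grad_fn:
                v._grad_fn()

a, b = Value(2.0), Value(3.0)
loss = a * b + a          # d(loss)/da = b + 1 = 4, d(loss)/db = a = 2
loss.backward()
print(a.grad, b.grad)     # 4.0 2.0
```

&lt;p&gt;Note how &lt;code&gt;a&lt;/code&gt; appears twice in the expression and its gradient accumulates across both uses. That accumulation is exactly why the real engine needs a topological sort before walking the graph.&lt;/p&gt;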
&lt;h2&gt;The actual code&lt;/h2&gt;
&lt;p&gt;Here is the model function, slightly cleaned up for readability. In the real code, it is a single flat function, not a class:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def gpt(token_id, pos_id, keys, values):
    tok_emb = state_dict[&amp;#39;wte&amp;#39;][token_id]       # token embedding
    pos_emb = state_dict[&amp;#39;wpe&amp;#39;][pos_id]          # position embedding
    x = [t + p for t, p in zip(tok_emb, pos_emb)]
    x = rmsnorm(x)

    for li in range(n_layer):
        # 1) Multi-head Attention block
        x_residual = x
        x = rmsnorm(x)
        q = linear(x, state_dict[f&amp;#39;layer{li}.attn_wq&amp;#39;])
        k = linear(x, state_dict[f&amp;#39;layer{li}.attn_wk&amp;#39;])
        v = linear(x, state_dict[f&amp;#39;layer{li}.attn_wv&amp;#39;])
        keys[li].append(k)
        values[li].append(v)
        # ... attention computation with Q, K, V ...
        x = [a + b for a, b in zip(x, x_residual)]  # residual connection

        # 2) MLP block
        x_residual = x
        x = rmsnorm(x)
        x = linear(x, state_dict[f&amp;#39;layer{li}.mlp_fc1&amp;#39;])
        x = [xi.relu() for xi in x]
        x = linear(x, state_dict[f&amp;#39;layer{li}.mlp_fc2&amp;#39;])
        x = [a + b for a, b in zip(x, x_residual)]  # residual connection

    logits = linear(x, state_dict[&amp;#39;lm_head&amp;#39;])
    return logits
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No classes. No inheritance. No framework. Just functions that take lists of &lt;code&gt;Value&lt;/code&gt; objects and return lists of &lt;code&gt;Value&lt;/code&gt; objects. &lt;code&gt;linear&lt;/code&gt; is a matrix-vector multiply. &lt;code&gt;rmsnorm&lt;/code&gt; normalizes activations. &lt;code&gt;softmax&lt;/code&gt; turns raw scores into probabilities. Each one is two or three lines long.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;state_dict&lt;/code&gt; is a plain dictionary of weight matrices, each made of &lt;code&gt;Value&lt;/code&gt; objects that know how to compute their own gradients. That&amp;#39;s how the whole thing fits in 200 lines.&lt;/p&gt;
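&lt;p&gt;Those helpers are short enough to sketch in full. These versions are mine, written over plain floats rather than &lt;code&gt;Value&lt;/code&gt; objects, but they show the math each one performs:&lt;/p&gt;

```python
import math

def linear(x, w):
    # Matrix-vector multiply: one dot product per output row.
    return [sum(xi * wi for xi, wi in zip(x, row)) for row in w]

def rmsnorm(x, eps=1e-5):
    # Scale the vector so its root-mean-square is ~1.
    ms = sum(xi * xi for xi in x) / len(x)
    scale = 1.0 / math.sqrt(ms + eps)
    return [xi * scale for xi in x]

def softmax(scores):
    # Subtract the max for numerical stability, then normalize.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([1.0, 2.0, 3.0]))  # three probabilities summing to 1
```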
&lt;h2&gt;Self-attention, explained concretely&lt;/h2&gt;
&lt;p&gt;Self-Attention is what makes this a transformer instead of a bag-of-characters model. It is how the model learns that the &amp;quot;m&amp;quot; in &amp;quot;emma&amp;quot; relates to the &amp;quot;e&amp;quot; before it.&lt;/p&gt;
&lt;p&gt;Each token gets projected into three vectors: a Query (Q), a Key (K), and a Value (V). The Query says &amp;quot;what am I looking for?&amp;quot;, the Key says &amp;quot;what do I contain?&amp;quot;, and the Value says &amp;quot;what information do I carry?&amp;quot; The model computes dot products between the current Query and all cached Keys, runs them through softmax to get attention weights, then takes a weighted sum of the Values.&lt;/p&gt;
&lt;p&gt;In MicroGPT&amp;#39;s code, this is fully explicit. You can see each dot product being computed in a loop, one element at a time. In PyTorch, the same computation is hidden inside batched matrix multiplications. Same math, different packaging.&lt;/p&gt;
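&lt;p&gt;Here is the attention step on toy numbers. The vectors are mine, not from the file, and the scaling by the square root of the dimension follows standard attention:&lt;/p&gt;

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# One query from the current token, two cached keys from earlier tokens,
# each key paired with a 2-d value vector. q points the same way as the
# first key, so attention should favor the first value.
q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]

scores = [dot(q, k) / math.sqrt(len(q)) for k in keys]   # scaled dot products
weights = softmax(scores)                                 # attention weights
out = [sum(w * v[i] for w, v in zip(weights, values))     # weighted sum of values
       for i in range(len(values[0]))]
print(weights, out)
```

&lt;p&gt;The first weight comes out larger than the second, so the output leans toward the first value vector: the query &amp;quot;found&amp;quot; the key it matched.&lt;/p&gt;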
&lt;h2&gt;Running it yourself&lt;/h2&gt;
&lt;p&gt;The whole thing is a single file. You download it and run &lt;code&gt;python train.py&lt;/code&gt;. That&amp;#39;s it. No &lt;code&gt;pip install&lt;/code&gt;, no environment setup, no CUDA drivers.&lt;/p&gt;
&lt;p&gt;On Karpathy&amp;#39;s MacBook, training takes about a minute. After 500 steps, the loss drops from around 3.3 (random guessing among 27 tokens) to about 2.37, and you start seeing plausible generated names in the output.&lt;/p&gt;
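&lt;p&gt;That starting number isn&amp;#39;t arbitrary: a model guessing uniformly over a 27-token vocabulary has a cross-entropy of ln(27). (Treating the 27 tokens as 26 letters plus one end token is my reading, not spelled out here.)&lt;/p&gt;

```python
import math

vocab_size = 27                             # 26 letters plus an end token (assumed split)
uniform_loss = -math.log(1 / vocab_size)    # cross-entropy of uniform guessing
print(round(uniform_loss, 2))               # ~3.3, the loss before any learning
```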
&lt;p&gt;You can swap out the dataset and train on anything character-level -- city names, Pokemon, short poems. The rest of the code does not change. You can also tweak the hyper-parameters to see what happens: change &lt;code&gt;n_embd&lt;/code&gt;, &lt;code&gt;n_layer&lt;/code&gt;, &lt;code&gt;n_head&lt;/code&gt;, and watch how it affects what the model learns.&lt;/p&gt;
&lt;p&gt;If you want the full guided walkthrough, Karpathy wrote a detailed companion blog post at &lt;a href=&quot;http://karpathy.github.io/2026/02/12/microgpt/&quot;&gt;karpathy.github.io/2026/02/12/microgpt&lt;/a&gt; that walks through every section of the code. There is also a &lt;a href=&quot;https://colab.research.google.com/drive/1vyN5zo6rqUp_dYNbT4Yrco66zuWCZKoN?usp=sharing&quot;&gt;Google Colab notebook&lt;/a&gt; if you want to run it without downloading anything.&lt;/p&gt;
&lt;h2&gt;The gap between this and ChatGPT&lt;/h2&gt;
&lt;p&gt;MicroGPT is the algorithm. ChatGPT is that same algorithm with a long list of engineering on top. Karpathy&amp;#39;s blog post spells out the differences section by section, and none of them change the core loop. The tokenizer goes from single characters to BPE with 100K tokens. The autograd goes from scalar &lt;code&gt;Value&lt;/code&gt; objects to batched tensor operations on GPUs. The 4,192 parameters become hundreds of billions. The single-document training steps become batches of millions of tokens processed across thousands of GPUs for months.&lt;/p&gt;
&lt;p&gt;But the structure is the same. Embed tokens, add positions, pass through attention and MLP blocks on a residual stream, project to logits, compute loss, backpropagate, update parameters. That is what MicroGPT makes visible.&lt;/p&gt;
&lt;p&gt;If you are going to spend your days working alongside these tools, it helps to know what they are actually doing. MicroGPT makes that concrete in an afternoon of reading.&lt;/p&gt;
</content:encoded></item><item><title>Bell Curve Promotions Are Broken</title><link>https://rezhajul.io/posts/bell-curve-promotions-are-broken/</link><guid isPermaLink="true">https://rezhajul.io/posts/bell-curve-promotions-are-broken/</guid><description>Forced ranking systems pit teammates against each other and reward politics over performance. The math doesn&apos;t even work.</description><pubDate>Thu, 05 Mar 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Here&amp;#39;s a system that a lot of tech companies use for promotions: take all the engineers on a team, rank them against each other, and force the results into a bell curve. A fixed percentage gets &amp;quot;exceeds expectations,&amp;quot; most land in &amp;quot;meets expectations,&amp;quot; and someone has to be the bottom. The quota is set before anyone looks at actual work.&lt;/p&gt;
&lt;p&gt;I have a problem with this. Several, actually.&lt;/p&gt;
&lt;h2&gt;The math is wrong&lt;/h2&gt;
&lt;p&gt;A bell curve describes natural distributions in large populations. Height, test scores across thousands of students, manufacturing defects in a factory producing millions of units. It works when you have large sample sizes and truly random variation.&lt;/p&gt;
&lt;p&gt;An engineering team is not a large population. It&amp;#39;s 6 to 15 people, hired through the same interview pipeline, filtered by the same bar. You&amp;#39;re looking at a pre-selected group. Forcing a normal distribution onto a small, filtered sample is just bad statistics. If you hired well, most of your team should be performing well. That&amp;#39;s the whole point of a hiring bar.&lt;/p&gt;
&lt;p&gt;A 2012 paper by Ernest O&amp;#39;Boyle Jr. and Herman Aguinis in Personnel Psychology analyzed over 600,000 observations across multiple industries. Their finding: individual performance follows a power law distribution, not a normal distribution. A small number of people produce outsized results, and the rest cluster much more tightly than a bell curve would predict. The bell curve assumption is wrong even at scale. At team level, it&amp;#39;s fiction.&lt;/p&gt;
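&lt;p&gt;You can see the gap between the two assumptions with a quick simulation. The distribution parameters here are toy numbers of mine, not from the paper; the point is only the shape of the difference:&lt;/p&gt;

```python
import random

random.seed(0)
n = 100_000

# Normal world: performance clusters symmetrically around the mean.
normal = [random.gauss(100, 15) for _ in range(n)]

# Power-law-ish world (Pareto tail): a small number of outsized performers.
pareto = [100 * random.paretovariate(1.8) for _ in range(n)]

def top1_share(xs):
    # Fraction of total output produced by the top 1% of people.
    xs = sorted(xs, reverse=True)
    k = len(xs) // 100
    return sum(xs[:k]) / sum(xs)

print(f"top 1% share, normal:    {top1_share(normal):.1%}")
print(f"top 1% share, power law: {top1_share(pareto):.1%}")
```

&lt;p&gt;Under the normal assumption the top 1% produce barely more than 1% of the output; under a heavy tail they produce a large multiple of that. Forced ranking quietly assumes the first world.&lt;/p&gt;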
&lt;h2&gt;It punishes good teams&lt;/h2&gt;
&lt;p&gt;Say you&amp;#39;re on a team where everyone is genuinely strong. Maybe you hired well, maybe the team has been together long enough that everyone has leveled up. Under forced ranking, someone still has to be at the bottom. Not because they&amp;#39;re bad, but because the spreadsheet needs a name in that slot.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve seen this play out. A solid engineer doing good, consistent work gets a &amp;quot;below expectations&amp;quot; rating because someone had to. Their morale craters, they start interviewing, and within a few months you lose someone you didn&amp;#39;t want to lose. All because a distribution model required a sacrifice.&lt;/p&gt;
&lt;p&gt;The reverse is also true. On a weak team, mediocre work can earn a top rating because relative performance is all that matters. The system doesn&amp;#39;t measure how good you are. It measures how you compare to the people sitting near you.&lt;/p&gt;
&lt;h2&gt;It kills collaboration&lt;/h2&gt;
&lt;p&gt;Here&amp;#39;s the part that really gets me. If my promotion depends on being ranked higher than my teammates, why would I help them? Every hour I spend unblocking a colleague, mentoring a junior engineer, or doing thankless infrastructure work is an hour I&amp;#39;m not spending on visible, promotable work.&lt;/p&gt;
&lt;p&gt;The system creates a zero-sum game inside the team. Your gain is my loss. That&amp;#39;s a terrible incentive structure for engineering, where the best outcomes come from people working together. You want engineers sharing knowledge, reviewing each other&amp;#39;s code thoroughly, stepping in when someone is stuck. Forced ranking makes all of that irrational.&lt;/p&gt;
&lt;p&gt;Ed Lazear at Stanford documented this in his research on tournament-based compensation. When workers compete directly against each other for fixed rewards, sabotage and information hoarding increase. He studied sales teams, but the dynamic applies everywhere. Put people in a tournament and they play the tournament, not the actual game.&lt;/p&gt;
&lt;h2&gt;It rewards performance theater&lt;/h2&gt;
&lt;p&gt;When you know you&amp;#39;re being ranked, you optimize for visibility, not impact. You pick projects that look impressive in a review packet. You make sure your name is on the high-profile launch, even if your actual contribution was small. You avoid the risky cleanup work that might not pan out.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve watched engineers avoid taking on critical tech debt work because it&amp;#39;s hard to explain in a promotion review. &amp;quot;I spent three months making the deploy pipeline 40% faster&amp;quot; doesn&amp;#39;t sound as good as &amp;quot;I led the redesign of the recommendation system.&amp;quot; One of those has more impact on the team&amp;#39;s daily life. The other one gets promoted.&lt;/p&gt;
&lt;p&gt;The system selects for people who are good at looking good, which is a different skill from being good at engineering. Over enough cycles, your senior ranks fill up with people who optimized for the game rather than the work.&lt;/p&gt;
&lt;h2&gt;Manager discretion becomes kingmaker&lt;/h2&gt;
&lt;p&gt;In practice, forced ranking comes down to your manager arguing for you in a calibration meeting. If your manager is good at internal politics, you get a better ranking. If they&amp;#39;re new, or quiet, or have too many reports to advocate for individually, you lose out.&lt;/p&gt;
&lt;p&gt;This means your career progression depends less on your actual engineering output and more on your manager&amp;#39;s social capital. Two engineers doing identical work on different teams can get wildly different ratings because of calibration room dynamics. The system pretends to be objective, but the rankings are produced through negotiation between managers with competing incentives.&lt;/p&gt;
&lt;h2&gt;What companies say vs. what happens&lt;/h2&gt;
&lt;p&gt;The stated goal of forced ranking is always the same: differentiate performance, reward the best, and manage out underperformers. It sounds reasonable in a slide deck.&lt;/p&gt;
&lt;p&gt;What actually happens is that people learn the meta-game. Senior engineers time their project deliveries around review cycles. People hoard credit and avoid shared ownership. Teams develop unspoken agreements about whose &amp;quot;turn&amp;quot; it is. The process consumes weeks of management time that could be spent on actual work.&lt;/p&gt;
&lt;p&gt;Microsoft used stack ranking for years and famously abandoned it in 2013. Former employees described a culture where people spent more energy on internal positioning than on building good products. Kurt Eichenwald&amp;#39;s Vanity Fair piece on Microsoft&amp;#39;s &amp;quot;lost decade&amp;quot; pointed directly at stack ranking as a factor. When they dropped it, multiple teams reported improved collaboration within months.&lt;/p&gt;
&lt;h2&gt;The alternative isn&amp;#39;t &amp;quot;everyone gets a trophy&amp;quot;&lt;/h2&gt;
&lt;p&gt;The common defense of bell curves is that without them, managers would rate everyone highly and nobody would be held accountable. And yes, rating inflation is a real problem. But forced distribution doesn&amp;#39;t fix it. It just replaces one distortion with another.&lt;/p&gt;
&lt;p&gt;Better alternatives exist. You can evaluate people against defined criteria for their level rather than against each other. You can separate performance feedback from compensation decisions so the review process isn&amp;#39;t contaminated by budget constraints. You can use peer feedback and work artifacts instead of manager-mediated rankings.&lt;/p&gt;
&lt;p&gt;Some companies do this already. The ones whose engineers I&amp;#39;ve talked to tend to have higher retention and, anecdotally, less internal politics. It&amp;#39;s more work for managers, which is probably why many companies default to the bell curve. It&amp;#39;s easier to administer. But &amp;quot;easy to administer&amp;quot; is a weird reason to use a system that makes your best people want to leave.&lt;/p&gt;
&lt;h2&gt;The real cost&lt;/h2&gt;
&lt;p&gt;Every time a forced ranking system pushes out a good engineer who happened to be on a strong team, or promotes a mediocre one who happened to be on a weak team, the company pays for it. In hiring costs, in lost institutional knowledge, in the slow erosion of trust that makes people stop taking risks.&lt;/p&gt;
&lt;p&gt;I keep coming back to this: the bell curve is a model for describing populations, not a tool for managing people. Using it to decide who gets promoted is like using a thermometer to decide what to have for dinner. The instrument isn&amp;#39;t designed for the job.&lt;/p&gt;
&lt;p&gt;If your promotion system requires that some percentage of good engineers be labeled as underperformers, the system is broken. Not the engineers.&lt;/p&gt;
</content:encoded></item><item><title>The Skeptic Who Got Benchmaxxed: What Actually Changed About AI Coding Agents</title><link>https://rezhajul.io/posts/the-skeptic-who-got-benchmaxxed/</link><guid isPermaLink="true">https://rezhajul.io/posts/the-skeptic-who-got-benchmaxxed/</guid><description>I used to dismiss AI coding agents as expensive autocomplete. Then Opus 4.5 dropped, and a data scientist started shipping Rust crates that beat numpy. A breakdown of Max Woolf&apos;s journey and why the AGENTS.md file might be the most underrated tool in the stack.</description><pubDate>Wed, 04 Mar 2026 13:00:00 GMT</pubDate><content:encoded>&lt;p&gt;There are two types of AI coding blog posts. The first kind is breathless hype: &amp;quot;I built a SaaS in 30 minutes with Claude!&amp;quot; The second kind is breathless doom: &amp;quot;AI is destroying the craft of programming!&amp;quot;&lt;/p&gt;
&lt;p&gt;Max Woolf&amp;#39;s &lt;a href=&quot;https://minimaxir.com/2026/02/ai-agent-coding/&quot;&gt;recent post&lt;/a&gt; is neither. It&amp;#39;s the rare piece where someone who was publicly skeptical about AI agents changed their mind, showed the receipts, and still managed to be annoyed about it.&lt;/p&gt;
&lt;p&gt;I want to break down what he actually did, because buried under 5,000 words of dry humor is a workflow that I think most developers are sleeping on.&lt;/p&gt;
&lt;h2&gt;The backstory: a professional skeptic&lt;/h2&gt;
&lt;p&gt;Woolf is a data scientist in San Francisco. Last May, he wrote a post titled &lt;a href=&quot;https://minimaxir.com/2025/05/llm-use/&quot;&gt;As an Experienced LLM User, I Actually Don&amp;#39;t Use Generative LLMs Often&lt;/a&gt;. His position was reasonable: LLMs can answer simple coding questions, but agents are unpredictable, expensive, and overhyped. He was open to changing his mind if the tech improved.&lt;/p&gt;
&lt;p&gt;Fast forward to November. Anthropic dropped Opus 4.5 right before Thanksgiving. Woolf noticed the timing was suspicious. Companies bury bad announcements on holidays. He had no Thanksgiving plans, so he tested Opus anyway.&lt;/p&gt;
&lt;p&gt;What he found was not what he expected.&lt;/p&gt;
&lt;h2&gt;The AGENTS.md revelation&lt;/h2&gt;
&lt;p&gt;Before touching Opus, Woolf did something that most people skip: he wrote an &lt;code&gt;AGENTS.md&lt;/code&gt; file. If you&amp;#39;re not familiar, it&amp;#39;s a file you put in your project root that controls agent behavior, like a system prompt for your codebase.&lt;/p&gt;
&lt;p&gt;This is where it gets interesting.&lt;/p&gt;
&lt;p&gt;Most people complain about agents generating emoji-laden, over-commented, verbose garbage. Woolf&amp;#39;s fix was simple. He added rules:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-md&quot;&gt;**NEVER** use emoji, or unicode that emulates emoji (e.g. ✓, ✗).
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-md&quot;&gt;**MUST** avoid including redundant comments which are tautological
or self-demonstrating
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;He added preferences for &lt;code&gt;uv&lt;/code&gt; over base Python, &lt;code&gt;polars&lt;/code&gt; over &lt;code&gt;pandas&lt;/code&gt;, secrets in &lt;code&gt;.env&lt;/code&gt;, and a dozen other opinionated constraints. Not telling the agent &lt;em&gt;what&lt;/em&gt; to build, but &lt;em&gt;how&lt;/em&gt; to build it.&lt;/p&gt;
&lt;p&gt;He claims the difference between having and not having this file is immediately obvious. I believe him. I&amp;#39;ve seen the same pattern with my own agents. When I &lt;a href=&quot;/posts/reality-ai-pair-programming-management/&quot;&gt;migrated my home server with Cici&lt;/a&gt;, the thing that made it work wasn&amp;#39;t the model. It was writing &lt;code&gt;TOOLS.md&lt;/code&gt; and &lt;code&gt;MEMORY.md&lt;/code&gt; to give the agent context about my network, my paths, my conventions. Without those files, Cici was guessing. With them, she executed moves without asking &amp;quot;which folder?&amp;quot;&lt;/p&gt;
&lt;p&gt;Woolf&amp;#39;s approach is the same idea, scaled up. His &lt;code&gt;AGENTS.md&lt;/code&gt; is essentially what I called &lt;a href=&quot;/posts/reality-ai-pair-programming-management/&quot;&gt;&amp;quot;the API for your AI agent&amp;quot;&lt;/a&gt; -- except he&amp;#39;s taken it further with granular formatting rules and tool preferences.&lt;/p&gt;
&lt;p&gt;His Python and &lt;a href=&quot;https://gist.github.com/minimaxir/068ef4137a1b6c1dcefa785349c91728&quot;&gt;Rust&lt;/a&gt; &lt;code&gt;AGENTS.md&lt;/code&gt; files are public. Worth stealing.&lt;/p&gt;
&lt;h2&gt;The prompting strategy nobody talks about&lt;/h2&gt;
&lt;p&gt;Here&amp;#39;s the part that doesn&amp;#39;t fit in a tweet. Woolf doesn&amp;#39;t just type &lt;code&gt;build me a thing&lt;/code&gt; into Claude Code. He writes full Markdown spec files, tracked in git, with explicit constraints. Then he tells the agent: implement this file.&lt;/p&gt;
&lt;p&gt;His YouTube scraper prompt is a good example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-md&quot;&gt;Create a robust Python script that, given a YouTube Channel ID,
can scrape the YouTube Data API and store all video metadata in
a SQLite database. The YOUTUBE_API_KEY is present in `.env`.

You MUST obey ALL the FOLLOWING rules:
- Do not use the Google Client SDK. Use the REST API with `httpx`.
- Include sensible aggregate metrics.
- Include `channel_id` and `retrieved_at` in the database schema.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice: he specifies the HTTP library. He specifies the schema columns. He bans the SDK he doesn&amp;#39;t want. This is not &amp;quot;vibe coding.&amp;quot; This is writing a spec, which is what senior engineers do anyway, except now the spec gets executed immediately. I wrote about this exact shift in &lt;a href=&quot;/posts/next-level-coding-with-amp/&quot;&gt;my Amp post&lt;/a&gt;: plan first, code second. The code is just the implementation detail. Woolf arrived at the same conclusion independently, but his specs are even more constrained. He leaves the agent zero wiggle room.&lt;/p&gt;
&lt;p&gt;The result worked first try. 20,000 videos scraped. Clean, Pythonic code. No Sonnet 4.5 slop.&lt;/p&gt;
&lt;h2&gt;From Python to Rust: where it gets wild&lt;/h2&gt;
&lt;p&gt;Once Woolf confirmed Opus could handle Python, he did what any sane person would do: he asked it to write Rust.&lt;/p&gt;
&lt;p&gt;Historically, LLMs have been terrible at Rust. The language is niche, the borrow checker is unforgiving, and there&amp;#39;s not enough training data for LLMs to fake their way through. Woolf had been testing LLMs on Rust for years. They always failed.&lt;/p&gt;
&lt;p&gt;They stopped failing.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s what he built, all with Opus and later Codex:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;icon-to-image&lt;/strong&gt;: Renders Font Awesome icons into images at arbitrary resolution. Written in Rust with Python bindings via PyO3. Features supersampled antialiasing, transparent backgrounds, and PNG/WebP output. He started with &lt;code&gt;fontdue&lt;/code&gt; for speed but discovered it can&amp;#39;t render curves properly at high resolution (it approximates). Told Opus, Opus swapped in &lt;code&gt;ab_glyph&lt;/code&gt; without breaking anything.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;miditui&lt;/strong&gt;: A MIDI composer and playback DAW that runs entirely in a terminal. Yes, a terminal. Uses &lt;code&gt;ratatui&lt;/code&gt; for the UI and &lt;code&gt;rodio&lt;/code&gt; for audio. Opus couldn&amp;#39;t see the terminal output and still implemented correct UI changes. Woolf fell back on his QA engineering background to find bugs manually and describe them to the agent.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ballin&lt;/strong&gt;: A terminal physics simulator rendering 10,000+ bouncing balls using Braille unicode characters for sub-pixel resolution. Uses the &lt;code&gt;rapier2d&lt;/code&gt; physics engine. Built from 14 iterative prompts, each one a detailed spec, each followed by manual review and git commit.&lt;/p&gt;
&lt;p&gt;These are not toy demos. The &lt;a href=&quot;https://github.com/minimaxir/ballin/blob/main/PROMPTS.md&quot;&gt;ballin PROMPTS.md&lt;/a&gt; shows the full development history: 14 numbered prompts, each one building on the last, each one specific enough that the agent couldn&amp;#39;t misinterpret intent. Same pattern I use with Amp: decompose, specify, implement, review, commit. The difference is Woolf does it manually with Markdown files instead of using tooling to automate the decomposition. The principle is identical.&lt;/p&gt;
&lt;h2&gt;The benchmaxxing pipeline&lt;/h2&gt;
&lt;p&gt;Here&amp;#39;s where Woolf&amp;#39;s post stops being an experience report and starts being something else.&lt;/p&gt;
&lt;p&gt;He developed an 8-step pipeline for building machine learning algorithms in Rust:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Implement the algorithm with benchmarks&lt;/li&gt;
&lt;li&gt;Clean up code and optimize&lt;/li&gt;
&lt;li&gt;Scan for algorithmic weaknesses, describe problem/solution/impact for each&lt;/li&gt;
&lt;li&gt;Optimize until ALL benchmarks run 60% faster. Repeat until convergence. Don&amp;#39;t game the benchmarks&lt;/li&gt;
&lt;li&gt;Create tuning profiles for CPU thread saturation and parallelization&lt;/li&gt;
&lt;li&gt;Add Python bindings via PyO3&lt;/li&gt;
&lt;li&gt;Create Python comparison benchmarks against existing libraries&lt;/li&gt;
&lt;li&gt;Accuse the agent of cheating, then make it minimize output differences against a known-good implementation&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Steps 4 and 8 are the clever ones. Step 4 gives the agent a quantifiable target instead of a vague &amp;quot;make it faster.&amp;quot; Step 8 is a built-in integrity check: even if the agent found wild optimizations, the output must still match scikit-learn.&lt;/p&gt;
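&lt;p&gt;To make step 8 concrete, here is a minimal sketch of that kind of integrity check. This is my illustration, not Woolf&amp;#39;s actual harness, and the numbers are placeholders: compare the new implementation&amp;#39;s output against a trusted reference and fail loudly if they drift apart.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# sketch of a step-8 style integrity check, with placeholder numbers
import math

def max_abs_diff(a, b):
    # largest elementwise difference between two equal-length outputs
    return max(abs(x - y) for x, y in zip(a, b))

reference = [0.10, 0.25, 0.65]   # stand-in for known-good scikit-learn output
candidate = [0.10, 0.25, 0.65]   # stand-in for the optimized port
assert math.isclose(max_abs_diff(reference, candidate), 0.0, abs_tol=1e-6)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The point is the tolerance: the agent is free to find any optimization it likes, as long as the deltas stay this small.&lt;/p&gt;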
&lt;p&gt;Then he discovered something weird: chaining different models produces compound speedups. Codex optimizes the code 1.5-2x. Then Opus, working on the already-optimized code, somehow finds &lt;em&gt;more&lt;/em&gt; optimizations. Different models apparently have different optimization strategies. This is something I&amp;#39;ve been experimenting with too. In my &lt;a href=&quot;/posts/running-amp-without-using-credits/&quot;&gt;CLIProxyAPI setup&lt;/a&gt;, I route different Amp modes to different models: Opus for smart mode, Gemini for rush. Woolf&amp;#39;s approach is more deliberate. He chains them &lt;em&gt;sequentially on the same codebase&lt;/em&gt; to compound their different optimization instincts.&lt;/p&gt;
&lt;p&gt;The numbers he reports:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Algorithm&lt;/th&gt;
&lt;th&gt;vs. Existing Rust&lt;/th&gt;
&lt;th&gt;vs. Python&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;UMAP&lt;/td&gt;
&lt;td&gt;2-10x faster&lt;/td&gt;
&lt;td&gt;9-30x faster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HDBSCAN&lt;/td&gt;
&lt;td&gt;23-100x faster&lt;/td&gt;
&lt;td&gt;3-10x faster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GBDT&lt;/td&gt;
&lt;td&gt;1.1-1.5x faster&lt;/td&gt;
&lt;td&gt;24-42x faster fit&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;p&gt;If these numbers are real, and he open-sourced &lt;a href=&quot;https://github.com/minimaxir/nndex&quot;&gt;nndex&lt;/a&gt; as proof, that&amp;#39;s not incremental improvement. That&amp;#39;s a different category of result.&lt;/p&gt;
&lt;h2&gt;nndex: the proof of concept&lt;/h2&gt;
&lt;p&gt;The project he released to back up his claims is &lt;code&gt;nndex&lt;/code&gt;: an in-memory vector store for exact nearest-neighbor search, written in Rust with Python bindings. It&amp;#39;s conceptually simple (cosine similarity reduces to dot products on normalized vectors) but the implementation is anything but.&lt;/p&gt;
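&lt;p&gt;The &amp;quot;conceptually simple&amp;quot; part is worth seeing once. On unit-length vectors, cosine similarity collapses to a plain dot product, which is the property that lets the SIMD and GPU machinery apply. A quick self-contained check:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# cosine similarity equals the dot product once vectors are normalized
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

a = [3.0, 4.0]
b = [4.0, 3.0]
cosine = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
assert math.isclose(cosine, dot(normalize(a), normalize(b)))
&lt;/code&gt;&lt;/pre&gt;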
&lt;p&gt;The Rust code uses:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;simsimd&lt;/code&gt; for SIMD-accelerated dot products&lt;/li&gt;
&lt;li&gt;&lt;code&gt;rayon&lt;/code&gt; for parallel iteration with adaptive thresholds&lt;/li&gt;
&lt;li&gt;Five different single-query strategies and five batch strategies, selected at runtime based on matrix shape&lt;/li&gt;
&lt;li&gt;&lt;code&gt;wgpu&lt;/code&gt; for GPU compute (Metal/Vulkan/D3D12) with custom WGSL shaders&lt;/li&gt;
&lt;li&gt;An IVF approximate index with spherical k-means for when exact search is overkill&lt;/li&gt;
&lt;li&gt;LRU caching, denormal flushing, zero-copy numpy interop via PyO3&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The benchmark results against numpy (which uses optimized BLAS under the hood):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Low-to-medium dimensions (&amp;lt; 256)&lt;/strong&gt;: nndex wins by 2-9x&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Single query on 50k rows&lt;/strong&gt;: 4.9x faster&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;All results&lt;/strong&gt;: 99.5-100% top-k overlap with numpy. Similarity deltas under 1e-6&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The whole thing is built with &lt;code&gt;#![forbid(unsafe_code)]&lt;/code&gt;. All performance comes from safe Rust, SIMD via safe wrappers, and GPU dispatch. No &lt;code&gt;unsafe&lt;/code&gt; blocks.&lt;/p&gt;
&lt;h2&gt;What I actually take away from this&lt;/h2&gt;
&lt;p&gt;Woolf&amp;#39;s post is long and covers a lot of ground. Here&amp;#39;s what I think matters:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. AGENTS.md is not optional.&lt;/strong&gt; It&amp;#39;s the difference between useful output and garbage. If you&amp;#39;re getting bad results from coding agents, this is the first thing to fix. Not the model. Not the prompt. The persistent context file that shapes every interaction. I&amp;#39;ve been saying this since my &lt;a href=&quot;/posts/pair-programming-with-a-lobster/&quot;&gt;first week with Clawdbot&lt;/a&gt; -- the agent remembered my preferences because I wrote them down. Woolf&amp;#39;s &lt;code&gt;AGENTS.md&lt;/code&gt; is the same concept, but production-grade.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. Prompting is spec writing.&lt;/strong&gt; The people getting good results aren&amp;#39;t typing casual requests. They&amp;#39;re writing detailed specifications with explicit constraints, tracking them in git, and referencing them by filename. This is just good engineering practice that happens to also work for agents.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. The &amp;quot;literal genie&amp;quot; framing is perfect.&lt;/strong&gt; Agents don&amp;#39;t read minds. They don&amp;#39;t infer your preferences. They do exactly what you say, including the things you forgot to say. You need to be specific about what you DON&amp;#39;T want as much as what you do.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4. Domain expertise is the multiplier.&lt;/strong&gt; Woolf knew enough about MIDI, physics engines, font rendering, and machine learning algorithms to audit agent output and catch mistakes. Without that knowledge, the same prompts would produce the same bugs but nobody would notice until production. I learned this the hard way when &lt;a href=&quot;/posts/reality-ai-pair-programming-management/&quot;&gt;Cici tried to hack my router&lt;/a&gt; with 2015-era exploits. If I didn&amp;#39;t know my TP-Link AX used encrypted tokens, I would have let the agent waste an hour on a dead end.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;5. Chaining models is underexplored.&lt;/strong&gt; The idea that Codex and Opus find different optimizations, and that running them in sequence produces compound speedups, caught me off guard. It&amp;#39;s like getting a second opinion from a doctor who went to a different medical school.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;6. QA skills are the new coding skills.&lt;/strong&gt; Woolf explicitly mentions that his background as a black-box QA engineer was critical. Finding bugs in agent code is different from writing bug-free code. If you spent years in QA, you might be better positioned for the agent era than the 10x developer next to you.&lt;/p&gt;
&lt;h2&gt;The uncomfortable conclusion&lt;/h2&gt;
&lt;p&gt;I&amp;#39;ve written before about how &lt;a href=&quot;/posts/reality-ai-pair-programming-management/&quot;&gt;AI pair programming is really management&lt;/a&gt;. Woolf&amp;#39;s experience confirms this but takes it further. He&amp;#39;s not managing one agent on one task. He&amp;#39;s running a pipeline: spec, implement, benchmark, optimize, chain, verify. The agent is a tool in the pipeline, not the pipeline itself. This is the &lt;a href=&quot;/posts/compound-engineering/&quot;&gt;compound engineering&lt;/a&gt; loop I wrote about -- Plan, Work, Review, Compound -- except Woolf&amp;#39;s &amp;quot;Compound&amp;quot; step is literally running benchmarks and feeding the results back into the next optimization pass.&lt;/p&gt;
&lt;p&gt;The uncomfortable part is that this workflow produces results that are hard to wave away. You can&amp;#39;t say &amp;quot;it&amp;#39;s just regurgitating GitHub&amp;quot; when the code is faster than everything on GitHub. You can&amp;#39;t say &amp;quot;it&amp;#39;s just autocomplete&amp;quot; when it&amp;#39;s implementing physics engines and ML algorithms from specs.&lt;/p&gt;
&lt;p&gt;You &lt;em&gt;can&lt;/em&gt; say &amp;quot;well, I could do that myself given enough time.&amp;quot; And you&amp;#39;d be right. But Woolf addresses this too: the session limits on these tools forced him into a habit of coding for fun for an hour every day. The agents didn&amp;#39;t replace his programming. They let him tackle projects that would have taken months, and he learned Rust along the way by reading the diffs.&lt;/p&gt;
&lt;p&gt;This won&amp;#39;t replace all programming. But for people who know what they want to build and not necessarily how to build it, this workflow clearly works.&lt;/p&gt;
&lt;p&gt;I remain cautious. But my caution now has a different shape. I&amp;#39;m less worried about whether agents &lt;em&gt;can&lt;/em&gt; produce good code, and more worried about whether developers will put in the work to make them produce good code. The AGENTS.md file, the detailed specs, the manual review, the integrity checks: that&amp;#39;s a lot of discipline. &amp;quot;Vibe coding&amp;quot; is easier. I &lt;a href=&quot;/posts/why-dumb-agents-are-winning-shell-first-ai/&quot;&gt;argued before&lt;/a&gt; that I trust simple agents more because I can see what they&amp;#39;re doing. Woolf&amp;#39;s workflow is the opposite of simple -- it&amp;#39;s elaborate and deliberate -- but it shares the same core principle: transparency. Every prompt is in git. Every benchmark is reproducible. Every optimization is auditable.&lt;/p&gt;
&lt;p&gt;And that&amp;#39;s the real problem. The people getting the best results from agents are the ones who were already good engineers. The gap isn&amp;#39;t closing. It might be widening.&lt;/p&gt;
&lt;p&gt;Manage your agents. Or they&amp;#39;ll manage your codebase into the ground.&lt;/p&gt;
</content:encoded></item><item><title>How to lock yourself out of SSH with a single scp command</title><link>https://rezhajul.io/posts/2026-disable-ssh-access-with-scp/</link><guid isPermaLink="true">https://rezhajul.io/posts/2026-disable-ssh-access-with-scp/</guid><description>One scp command to copy some files, and suddenly you can&apos;t SSH into your own server anymore.</description><pubDate>Tue, 03 Mar 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Ever locked yourself out of your own server just by copying files with &lt;code&gt;scp&lt;/code&gt;? That&amp;#39;s exactly what happened to an engineer who wrote about it on &lt;em&gt;sny.sh&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s what went down. They wanted to transfer a local folder to the home directory on their server:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;scp -r . host:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Looks harmless, right? The problem was that the local folder had permissions set to &lt;code&gt;rwxrwxrwx&lt;/code&gt;, or &lt;strong&gt;777&lt;/strong&gt; (probably left over from testing something).&lt;/p&gt;
&lt;p&gt;Turns out &lt;code&gt;scp&lt;/code&gt; from OpenSSH doesn&amp;#39;t just copy files. It also changes the target folder&amp;#39;s permissions to match the source. So the home directory (&lt;code&gt;/home/user&lt;/code&gt;) on the server got set to 777.&lt;/p&gt;
&lt;p&gt;And that&amp;#39;s when everything broke. Next SSH login attempt, &lt;code&gt;sshd&lt;/code&gt; flat out refused:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;Authentication refused: bad ownership or modes for directory /home/user
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;OpenSSH is strict about this. If your home directory or &lt;code&gt;.ssh&lt;/code&gt; folder is writable by group or others, as 777 is, &lt;code&gt;sshd&lt;/code&gt; won&amp;#39;t accept public key authentication. It&amp;#39;s a security measure, and it makes sense, but it sure doesn&amp;#39;t feel great when you&amp;#39;re the one locked out.&lt;/p&gt;
&lt;p&gt;Luckily, this person still had WebDAV access to the server and could fix the home directory permissions back to &lt;code&gt;700&lt;/code&gt; (&lt;code&gt;rwx------&lt;/code&gt;). Without that, the server would&amp;#39;ve been toast, especially on a headless Raspberry Pi with no other way in.&lt;/p&gt;
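&lt;p&gt;If you ever hit this yourself, the recovery is a couple of &lt;code&gt;chmod&lt;/code&gt; calls through whatever side channel you still have; the exact paths depend on your setup. A small demo of the failure mode and the fix, using a scratch directory instead of a real home:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# simulate the damage on a scratch dir, then restore what sshd expects
mkdir -p demo_home/.ssh
chmod 777 demo_home        # what the scp copy left behind
chmod 700 demo_home        # home must not be group- or world-writable
chmod 700 demo_home/.ssh   # same for .ssh
stat -c %a demo_home       # GNU stat; on BSD/macOS use stat -f %Lp
&lt;/code&gt;&lt;/pre&gt;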
&lt;p&gt;Watch out when using &lt;code&gt;scp -r&lt;/code&gt;, especially when copying directly into your home directory. This behavior has been reported and should be fixed in OpenSSH 10.3.&lt;/p&gt;
</content:encoded></item><item><title>Compound Engineering</title><link>https://rezhajul.io/posts/compound-engineering/</link><guid isPermaLink="true">https://rezhajul.io/posts/compound-engineering/</guid><description>Most codebases rot over time. Compound engineering is the idea that every unit of work should make the next one easier, not harder.</description><pubDate>Mon, 02 Mar 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Most codebases get worse over time. You add a feature, and next month that feature makes the next one harder to build. After a few years, the team spends more time fighting the system than building on it. Everyone has lived through this.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve been thinking about a different approach lately. I&amp;#39;m calling it compound engineering: the idea that every unit of work should make the next unit easier. Bug fixes should eliminate entire categories of future bugs. Patterns you discover should become tools you reuse. The codebase should get easier to work with over time, not harder.&lt;/p&gt;
&lt;h2&gt;The loop&lt;/h2&gt;
&lt;p&gt;The whole thing runs on a four-step loop:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Plan → Work → Review → Compound → Repeat&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The first three steps are familiar to anyone who writes software. You figure out what to build, you build it, you check if it&amp;#39;s correct. Nothing new there.&lt;/p&gt;
&lt;p&gt;The fourth step is where the gains pile up. Skip it, and you&amp;#39;ve done regular engineering with AI tools. Do it, and you start accumulating knowledge that pays dividends.&lt;/p&gt;
&lt;h2&gt;Plan (where most of the thinking happens)&lt;/h2&gt;
&lt;p&gt;Planning is the step people underestimate the most. I&amp;#39;ve found that planning and review should take up roughly 80% of your time. Work and compounding fill the remaining 20%.&lt;/p&gt;
&lt;p&gt;That sounds extreme until you realize that a good plan means the AI agent can implement without much supervision. A bad plan means you&amp;#39;re babysitting every line of code. The plan is now the most important artifact you produce, more important than the code itself.&lt;/p&gt;
&lt;p&gt;What does planning look like? You understand the requirement, you research how similar things work in the codebase, you check the framework docs, you design the approach, and you poke holes in your own plan before handing it off.&lt;/p&gt;
&lt;h2&gt;Work (the agent writes code)&lt;/h2&gt;
&lt;p&gt;Once the plan exists, execution is relatively mechanical. The agent implements step by step. You run tests and linting after each change. You track what&amp;#39;s done and what&amp;#39;s left.&lt;/p&gt;
&lt;p&gt;If you trust the plan, there&amp;#39;s no need to watch every line of code. This is where most developers get stuck. They&amp;#39;ve been trained to review everything line by line, and letting go feels irresponsible. But if the plan is solid and you have tests, the risk is low.&lt;/p&gt;
&lt;h2&gt;Review (catch problems, capture lessons)&lt;/h2&gt;
&lt;p&gt;Review catches issues before they ship. But more importantly, review is where you notice patterns. What went wrong? What category of bug was this? Could the system have caught it automatically?&lt;/p&gt;
&lt;p&gt;One approach I&amp;#39;ve been experimenting with is running multiple specialized review agents in parallel, each looking at a different angle: security, performance, data integrity, architecture. Everything gets combined into a prioritized list. P1 issues get fixed immediately, P2 issues should be fixed, P3 items are nice-to-haves.&lt;/p&gt;
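&lt;p&gt;Structurally, that fan-out looks something like this. The reviewers here are trivial placeholders, where real ones would be agent calls, but the shape is the same: run the passes independently, then merge everything into one prioritized list:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# placeholder review passes fanned out in parallel, findings merged by priority
from concurrent.futures import ThreadPoolExecutor

def security_review(change):
    return [(1, 101)]    # (priority, finding id): a P1 issue

def performance_review(change):
    return [(3, 202)]    # a P3 nice-to-have

def architecture_review(change):
    return [(2, 303)]    # a P2 issue

passes = [security_review, performance_review, architecture_review]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda review: review(None), passes))

findings = sorted(f for batch in results for f in batch)
assert findings[0] == (1, 101)   # P1 issues land at the top
&lt;/code&gt;&lt;/pre&gt;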
&lt;p&gt;You don&amp;#39;t need a fleet of review agents to do this. The principle works at any scale. After any piece of work, ask: what did I learn here that I could write down?&lt;/p&gt;
&lt;h2&gt;Compound (the step nobody does)&lt;/h2&gt;
&lt;p&gt;This is the step that separates compound engineering from &amp;quot;engineering with AI tools.&amp;quot; After you finish a piece of work, you ask:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What worked?&lt;/li&gt;
&lt;li&gt;What didn&amp;#39;t?&lt;/li&gt;
&lt;li&gt;What&amp;#39;s the reusable insight?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Then you write it down somewhere the system can find it next time. In my setup, this means updating &lt;code&gt;AGENTS.md&lt;/code&gt; (the file the agent reads at the start of every session) with new patterns, creating specialized agents when warranted, and storing solved problems as searchable documentation.&lt;/p&gt;
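&lt;p&gt;What goes into the file is mundane on purpose. These entries are invented for illustration, but they show the shape: each one is a lesson that would otherwise be re-learned in a future session.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-md&quot;&gt;## Learned patterns
- Auth lives in app/auth/; reuse the existing token helpers, never roll new ones.
- Migrations MUST be reversible; every upgrade() needs a working downgrade().
- The flaky checkout test is timing-sensitive; bump the wait, do not delete it.
&lt;/code&gt;&lt;/pre&gt;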
&lt;p&gt;Future sessions should find past solutions automatically. If you figured out how authentication works, you document it once, and nobody has to ask you again. The knowledge belongs to the system, not to any individual.&lt;/p&gt;
&lt;h2&gt;The adoption ladder&lt;/h2&gt;
&lt;p&gt;Not everyone jumps straight to fully autonomous agents. I think about it in six stages:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 0:&lt;/strong&gt; You write everything by hand. This built great software for decades, but it&amp;#39;s slow by 2025 standards.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 1:&lt;/strong&gt; You use ChatGPT or Claude as a search engine with better answers. Copy-paste what&amp;#39;s useful. You&amp;#39;re still in full control.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 2:&lt;/strong&gt; You let AI tools make changes directly in your codebase, but you review every line. Most developers plateau here.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 3:&lt;/strong&gt; You create a detailed plan, let the agent implement it without supervision, and review the pull request. This is where compound engineering starts to work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 4:&lt;/strong&gt; You describe what you want, and the agent handles everything from research to PR creation. You review and merge.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 5:&lt;/strong&gt; You run multiple agents in parallel on cloud infrastructure, reviewing PRs as they come in. You&amp;#39;re directing a fleet.&lt;/p&gt;
&lt;h2&gt;Beliefs worth reconsidering&lt;/h2&gt;
&lt;p&gt;A few assumptions that I think get in the way:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;The code must be written by hand.&amp;quot;&lt;/strong&gt; The actual requirement is correct, maintainable code that solves the right problem. Who typed it doesn&amp;#39;t matter.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;First attempts should be good.&amp;quot;&lt;/strong&gt; In my experience, first attempts have a 95% garbage rate. Second attempts are still 50%. This isn&amp;#39;t failure, it&amp;#39;s iteration. The goal is to get to attempt three faster than you used to get to attempt one.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;Code is self-expression.&amp;quot;&lt;/strong&gt; This one stings, but letting go of attachment to code means you take feedback better, refactor without flinching, and skip arguments about style.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;quot;More typing equals more learning.&amp;quot;&lt;/strong&gt; The developer who reviews ten AI implementations understands more patterns than the one who hand-typed two. Understanding matters more than muscle memory.&lt;/p&gt;
&lt;h2&gt;The 50/50 rule&lt;/h2&gt;
&lt;p&gt;Traditional teams spend 90% of their time on features and 10% on everything else. I think the right split is closer to 50/50: half your time building features, half improving the system.&lt;/p&gt;
&lt;p&gt;An hour building a review agent saves ten hours of review over the next year. A test generator saves weeks of manual test writing. System improvements make work faster. Feature work doesn&amp;#39;t compound in the same way.&lt;/p&gt;
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;You don&amp;#39;t need a fancy multi-agent review system to benefit from this thinking. The core idea is dead simple: after you finish any piece of work, spend a few minutes writing down what you learned in a place where your future self (or your AI tools) can find it.&lt;/p&gt;
&lt;p&gt;Plan carefully, let the tools do the typing, review for substance, and write down what you learned. Each cycle gets a little faster than the last.&lt;/p&gt;
&lt;p&gt;The hard part isn&amp;#39;t the process. The hard part is the emotional adjustment. Letting go of line-by-line review. Trusting the plan. Accepting that code you didn&amp;#39;t type is still your responsibility. That takes time, and it&amp;#39;s okay to work through it at your own pace.&lt;/p&gt;
</content:encoded></item><item><title>28 Posts in 28 Days: What Happened When I Wrote Every Day in February</title><link>https://rezhajul.io/posts/2026-28-posts-in-28-days/</link><guid isPermaLink="true">https://rezhajul.io/posts/2026-28-posts-in-28-days/</guid><description>I published a blog post every day in February. Some were good, some were filler, and I learned more about my own writing habits than I expected.</description><pubDate>Sun, 01 Mar 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I didn&amp;#39;t plan this. On February 1st I published a post about &lt;a href=&quot;/posts/2026-reducing-openclaw-heartbeat-token-usage&quot;&gt;reducing OpenClaw&amp;#39;s heartbeat token usage&lt;/a&gt; because it was fresh in my head. Then I wrote another one the next day. By February 5th I realized I had a streak going and thought: okay, let&amp;#39;s see if I can do the whole month.&lt;/p&gt;
&lt;p&gt;February has 28 days. That felt like a dare.&lt;/p&gt;
&lt;h2&gt;What I actually wrote about&lt;/h2&gt;
&lt;p&gt;Looking back, the posts cluster into a few buckets. The biggest one is AI and agents. I wrote about &lt;a href=&quot;/posts/2026-reality-ai-pair-programming-management&quot;&gt;AI pair programming being more management than magic&lt;/a&gt;, about &lt;a href=&quot;/posts/2026-why-dumb-agents-are-winning-shell-first-ai&quot;&gt;shell-first agents&lt;/a&gt; outperforming fancy ones, about &lt;a href=&quot;/posts/2026-why-i-moved-my-ai-brain-to-homelab&quot;&gt;moving my AI setup to a homelab&lt;/a&gt;, about &lt;a href=&quot;/posts/2026-how-i-built-my-own-ai-news-anchor&quot;&gt;building my own AI news anchor&lt;/a&gt;, and about &lt;a href=&quot;/posts/2026-next-level-coding-with-amp&quot;&gt;planning before prompting&lt;/a&gt; when using Amp. This is where my head has been lately, so no surprise there.&lt;/p&gt;
&lt;p&gt;Then there&amp;#39;s the five-part blog redesign series. I &lt;a href=&quot;/posts/2026-redesigning-my-blog-the-audit&quot;&gt;audited what was wrong&lt;/a&gt;, &lt;a href=&quot;/posts/2026-redesigning-my-blog-the-blogroll&quot;&gt;added a blogroll&lt;/a&gt;, &lt;a href=&quot;/posts/2026-redesigning-my-blog-terminal-blueprint&quot;&gt;rebuilt the theme from scratch&lt;/a&gt;, &lt;a href=&quot;/posts/2026-redesigning-my-blog-the-small-stuff&quot;&gt;sweated the small stuff&lt;/a&gt;, and &lt;a href=&quot;/posts/2026-redesigning-my-blog-what-i-learned&quot;&gt;wrote up what I learned&lt;/a&gt;. Five posts on one redesign sounds excessive, but each part stood on its own and people seemed to find different parts useful.&lt;/p&gt;
&lt;p&gt;Then there&amp;#39;s the day job stuff -- data engineering, backend, DevOps. &lt;a href=&quot;/posts/2026-the-lazy-sysadmins-guide-to-docker&quot;&gt;Docker maintenance&lt;/a&gt;, &lt;a href=&quot;/posts/2026-managing-dns-as-code-with-dnscontrol&quot;&gt;DNS as code&lt;/a&gt;, &lt;a href=&quot;/posts/2026-upgrading-postgresql-18-docker&quot;&gt;upgrading four PostgreSQL instances&lt;/a&gt;, &lt;a href=&quot;/posts/2026-debugging-airflow-executor-reports-failed&quot;&gt;debugging an Airflow executor error&lt;/a&gt;, &lt;a href=&quot;/posts/2026-chasing-a-transitive-dependency-vulnerability&quot;&gt;chasing a transitive dependency vulnerability&lt;/a&gt;. This is what I actually do all day, so writing about it came naturally.&lt;/p&gt;
&lt;p&gt;And then there&amp;#39;s everything else. A &lt;a href=&quot;/posts/2026-building-pythonid-bot-telegram-group-management&quot;&gt;Telegram bot for 24,000 developers&lt;/a&gt;, a &lt;a href=&quot;/posts/2026-building-almanac-games-psn-scraper&quot;&gt;PSN game scraper&lt;/a&gt;, &lt;a href=&quot;/posts/2026-forget-notion-manage-house-in-terminal&quot;&gt;managing my house in the terminal&lt;/a&gt;, &lt;a href=&quot;/posts/2026-digital-hoarder-why-i-self-host&quot;&gt;why I self-host everything&lt;/a&gt;. I even wrote about &lt;a href=&quot;/posts/2026-you-cant-stack-overflow-a-deadlift&quot;&gt;deadlifts&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Some days had two posts. Those weren&amp;#39;t planned either -- I just had more to say.&lt;/p&gt;
&lt;h2&gt;The hard part&lt;/h2&gt;
&lt;p&gt;Around day 12 I hit a wall. I&amp;#39;d used up all the easy topics -- the stuff I&amp;#39;d been meaning to write about for weeks. From that point on, every morning started with &amp;quot;what am I going to write today?&amp;quot; which is a bad question to wake up to.&lt;/p&gt;
&lt;p&gt;What helped was lowering my standards. Not every post needs to be a deep dive. Some of my February posts are short. The &lt;a href=&quot;/posts/2026-yak-shaving-art-of-getting-lost&quot;&gt;yak shaving one&lt;/a&gt; is basically a long anecdote with a point at the end. The &lt;a href=&quot;/posts/2026-stop-using-innerhtml&quot;&gt;innerHTML post&lt;/a&gt; is a reaction to a Firefox feature. Those are fine. They&amp;#39;re blog posts, not dissertations.&lt;/p&gt;
&lt;p&gt;The other thing that helped was writing about whatever I was doing that day. The Airflow post came from debugging at work. The PostgreSQL post came from a weekend upgrade. The clap button posts (yes, &lt;a href=&quot;/posts/2026-adding-interactive-claps-to-static-blog&quot;&gt;two&lt;/a&gt; &lt;a href=&quot;/posts/2026-over-engineering-a-clap-button-again&quot;&gt;of&lt;/a&gt; them) came from actually building and then over-engineering the feature on this blog. Writing about what you&amp;#39;re already doing cuts the prep time in half.&lt;/p&gt;
&lt;h2&gt;What I noticed&lt;/h2&gt;
&lt;p&gt;My writing got faster. The first few posts took me two or three hours each. By the end of the month I could get a draft out in about an hour. The editing still took time, but getting words on the screen became less painful.&lt;/p&gt;
&lt;p&gt;I also noticed that I write better when I&amp;#39;m not trying to be comprehensive. My best posts from this month are the ones where I pick one specific thing and talk about it. The worst ones are where I tried to cover too much ground.&lt;/p&gt;
&lt;p&gt;The streak also changed how I consume information. I started reading blog posts, release notes, and documentation with a filter running in the background: &amp;quot;is there a post in this?&amp;quot; Sometimes there was. The &lt;a href=&quot;/posts/2026-go-1-26-stack-vs-heap-allocation&quot;&gt;Go 1.26 post&lt;/a&gt; on the last day of February came from reading Keith Randall&amp;#39;s Go blog post and wanting to explain the allocation changes to myself.&lt;/p&gt;
&lt;h2&gt;Am I going to keep doing this?&lt;/h2&gt;
&lt;p&gt;No. 28 days was enough. Daily publishing is a useful exercise but a bad habit. Some of these posts needed another day of editing. A few could have been combined into longer, better pieces. The pressure to publish something every day occasionally won over the desire to publish something good.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ll keep writing, probably two or three posts a week. The daily habit proved I have enough material. Now I want to be pickier about what gets published and spend more time on each piece.&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re thinking about trying a daily writing challenge: do it for a month, not longer. The first week is easy, the second week is hard, and the third week is when you actually start learning things about how you write. By the fourth week you&amp;#39;re ready to stop, and that&amp;#39;s the right time.&lt;/p&gt;
&lt;p&gt;29 posts. 28 days. Some good, some okay. On to March.&lt;/p&gt;
</content:encoded></item><item><title>Go 1.26: Stack vs Heap Allocation — Faster Code Without Changing a Line</title><link>https://rezhajul.io/posts/2026-go-1-26-stack-vs-heap-allocation/</link><guid isPermaLink="true">https://rezhajul.io/posts/2026-go-1-26-stack-vs-heap-allocation/</guid><description>Go 1.26 ships with compiler optimizations that move more slice allocations from the heap to the stack. Your existing code gets faster without a refactor.</description><pubDate>Sat, 28 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Keith Randall just published &lt;a href=&quot;https://go.dev/blog/allocation-optimizations&quot;&gt;Allocating on the Stack&lt;/a&gt; on the Go blog. If you&amp;#39;ve ever profiled a Go service and seen a pile of tiny allocations from &lt;code&gt;append&lt;/code&gt; growing a slice from zero, this one is worth reading.&lt;/p&gt;
&lt;p&gt;The gist: the Go 1.25 and 1.26 compilers can now keep slice backing stores on the stack in more situations than before.&lt;/p&gt;
&lt;h2&gt;What actually changed&lt;/h2&gt;
&lt;p&gt;The improvements landed in two stages.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Go 1.25&lt;/strong&gt; tackled &lt;code&gt;make&lt;/code&gt; with variable sizes. Previously, &lt;code&gt;make([]T, 0, n)&lt;/code&gt; could only be stack-allocated if &lt;code&gt;n&lt;/code&gt; was a compile-time constant. Now the compiler provisions a small 32-byte buffer on the stack and uses it when the requested size fits. So &lt;code&gt;make([]int, 0, someVar)&lt;/code&gt; can avoid the heap entirely if &lt;code&gt;someVar&lt;/code&gt; is small enough.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Go 1.26&lt;/strong&gt; goes further in two ways.&lt;/p&gt;
&lt;p&gt;First, &lt;code&gt;append&lt;/code&gt; on a nil or empty slice now uses a stack-allocated backing store as its first allocation. No &lt;code&gt;make&lt;/code&gt; with a capacity hint needed. How much fits depends on the element size: a 32-byte buffer holds four 8-byte values, eight 4-byte values, and so on. If the slice stays small, it never touches the heap.&lt;/p&gt;
&lt;p&gt;Second, slices that escape the function (returned to the caller) can still benefit. The compiler injects a &lt;code&gt;runtime.move2heap&lt;/code&gt; call right before the return. If the slice already grew beyond the stack buffer, the call is a no-op. If it&amp;#39;s still on the stack, it copies to a single, right-sized heap allocation at the last moment. This is the part I find most interesting, because it actually beats hand-written code. A manual &lt;code&gt;copy&lt;/code&gt; + &lt;code&gt;return&lt;/code&gt; pattern always pays for the final allocation and copy. The compiler only does it when the slice is still stack-backed.&lt;/p&gt;
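&lt;p&gt;To make the pattern concrete, here&amp;#39;s a minimal sketch (the function and data are invented for illustration): a result slice built with bare &lt;code&gt;append&lt;/code&gt;, no &lt;code&gt;make&lt;/code&gt;, returned to the caller. On Go 1.26 the backing store starts on the stack and is only copied to the heap if the slice actually escapes while stack-backed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main

import &amp;quot;fmt&amp;quot;

// collectEvens builds a slice with no capacity hint. On Go 1.26 the
// first append can use a stack-allocated backing store; because the
// slice escapes (it is returned), the compiler inserts the equivalent
// of runtime.move2heap to copy it out only when necessary.
func collectEvens(nums []int) []int {
	var evens []int // nil slice: no make, no up-front heap allocation
	for _, n := range nums {
		if n%2 == 0 {
			evens = append(evens, n)
		}
	}
	return evens
}

func main() {
	fmt.Println(collectEvens([]int{1, 2, 3, 4, 5, 6})) // [2 4 6]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On earlier Go versions the same code is still correct; it just hits the heap sooner.&lt;/p&gt;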
&lt;h2&gt;Why this matters in practice&lt;/h2&gt;
&lt;p&gt;I&amp;#39;ve worked on Go services where &lt;code&gt;append&lt;/code&gt;-heavy hot paths showed up in allocation profiles: building slices of results from database queries, collecting validation errors, assembling middleware chains. The kind of code where the slice is usually small but you don&amp;#39;t want to hardcode a capacity.&lt;/p&gt;
&lt;p&gt;The usual fix was sprinkling &lt;code&gt;make([]T, 0, expectedSize)&lt;/code&gt; everywhere, which works but adds noise. These compiler changes make that unnecessary for the common case.&lt;/p&gt;
&lt;p&gt;Go 1.26 also enables the &lt;a href=&quot;https://go.dev/doc/go1.26#runtime&quot;&gt;Green Tea garbage collector&lt;/a&gt; by default (it was experimental in 1.25). The Go team reports 10-40% less GC overhead for allocation-heavy programs. Combined with fewer objects landing on the heap in the first place, the two changes reinforce each other.&lt;/p&gt;
&lt;h2&gt;The catch&lt;/h2&gt;
&lt;p&gt;If the new allocation patterns expose existing &lt;code&gt;unsafe.Pointer&lt;/code&gt; misuse or other correctness issues, you can disable the optimization:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;go build -gcflags=all=-d=variablemakehash=n
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;a href=&quot;https://pkg.go.dev/golang.org/x/tools/cmd/bisect&quot;&gt;bisect tool&lt;/a&gt; with &lt;code&gt;-compile=variablemake&lt;/code&gt; can narrow down which allocation is causing trouble.&lt;/p&gt;
&lt;h2&gt;Bottom line&lt;/h2&gt;
&lt;p&gt;Upgrade to Go 1.26, rebuild, and check your allocation profiles. I&amp;#39;d be surprised if you don&amp;#39;t see fewer heap allocations.&lt;/p&gt;
&lt;p&gt;The full blog post with code examples: &lt;a href=&quot;https://go.dev/blog/allocation-optimizations&quot;&gt;Allocating on the Stack&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Sharpen the Saw: Don&apos;t Debug in the Dark</title><link>https://rezhajul.io/posts/2026-sharpen-the-saw/</link><guid isPermaLink="true">https://rezhajul.io/posts/2026-sharpen-the-saw/</guid><description>Sometimes the fastest way to fix a bug is to stop fixing it and fix your tools instead. A lesson on sharpening the saw.</description><pubDate>Fri, 27 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I recently read a post called &lt;a href=&quot;https://ochagavia.nl/blog/fix-your-tools/&quot;&gt;Fix Your Tools&lt;/a&gt; by Adolfo Ochagavía. It hit close to home because of a bad habit I think we all share as developers.&lt;/p&gt;
&lt;h2&gt;The Tunnel Vision Trap&lt;/h2&gt;
&lt;p&gt;The scenario is classic: You&amp;#39;re hunting a gnarly bug. You&amp;#39;re in &amp;quot;problem-solving mode.&amp;quot; You try to set a breakpoint, but your debugger ignores it.&lt;/p&gt;
&lt;p&gt;What do you do?&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re like most of us in the heat of the moment, you ignore the broken debugger. You think, &lt;em&gt;&amp;quot;I don&amp;#39;t have time to fix my tools, I have a bug to fix!&amp;quot;&lt;/em&gt; So you switch to &lt;code&gt;console.log&lt;/code&gt; or print statements, cluttering your code and guessing at the state.&lt;/p&gt;
&lt;p&gt;You spend hours chasing shadows because you&amp;#39;re debugging in the dark.&lt;/p&gt;
&lt;h2&gt;Sharpen the Saw&lt;/h2&gt;
&lt;p&gt;Adolfo&amp;#39;s realization was simple: &lt;strong&gt;Fix the darn debugger.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In his case, it was a one-line configuration change. Once he fixed his tool, he could see exactly what was happening, and the original bug was solved in minutes.&lt;/p&gt;
&lt;p&gt;This is the essence of Stephen Covey&amp;#39;s 7th habit: &lt;strong&gt;Sharpen the Saw&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re trying to cut down a tree with a dull saw, it takes forever. Stopping to sharpen the saw feels like a delay—&lt;em&gt;&amp;quot;I&amp;#39;m busy sawing!&amp;quot;&lt;/em&gt;—but in reality, it&amp;#39;s the only way to speed up.&lt;/p&gt;
&lt;h2&gt;When to Stop and Fix&lt;/h2&gt;
&lt;p&gt;It&amp;#39;s tricky to know when you&amp;#39;re &amp;quot;sharpening the saw&amp;quot; versus falling into a rabbit hole of &amp;quot;yak shaving&amp;quot; (endlessly tweaking configs without doing real work).&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s my rule of thumb:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;If the tool is &lt;em&gt;actively&lt;/em&gt; fighting you (linter crashing, debugger skipping, tests flaky), fix it immediately. You&amp;#39;re losing more time working around it.&lt;/li&gt;
&lt;li&gt;If the manual workaround is painful (say, deploying via FTP because you haven&amp;#39;t spent an hour on a CI script), fix it the second time it annoys you.&lt;/li&gt;
&lt;li&gt;If you&amp;#39;re just bored and reaching for a new Neovim color scheme instead of coding, that&amp;#39;s procrastination. Get back to work.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Next time your tools glitch while you&amp;#39;re working, resist the urge to power through with a workaround. Take a breath. Fix the tool. Turn the lights back on.&lt;/p&gt;
&lt;p&gt;Debugging is hard enough. Don&amp;#39;t do it in the dark.&lt;/p&gt;
</content:encoded></item><item><title>Stop Using innerHTML: The New Firefox Feature That Kills XSS</title><link>https://rezhajul.io/posts/stop-using-innerhtml-firefox-sethtml/</link><guid isPermaLink="true">https://rezhajul.io/posts/stop-using-innerhtml-firefox-sethtml/</guid><description>Firefox 148 just shipped the new Sanitizer API with setHTML(). Here&apos;s why you should stop using innerHTML today.</description><pubDate>Thu, 26 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We&amp;#39;ve all done it. You get some HTML string from an API (or worse, user input), and you need to render it on the screen. The quickest, dirtiest, and most common way? Good old &lt;code&gt;innerHTML&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// The classic footgun
const userBio = getUserInput(); 
document.getElementById(&amp;#39;profile&amp;#39;).innerHTML = userBio;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It works instantly, but it&amp;#39;s also the primary reason Cross-Site Scripting (XSS) vulnerabilities have plagued the web for two decades. If &lt;code&gt;userBio&lt;/code&gt; contains &lt;code&gt;&amp;lt;script&amp;gt;stealCookies()&amp;lt;/script&amp;gt;&lt;/code&gt; or &lt;code&gt;&amp;lt;img src=&amp;quot;x&amp;quot; onerror=&amp;quot;alert(&amp;#39;hacked&amp;#39;)&amp;quot;&amp;gt;&lt;/code&gt;, your app just executed malicious code.&lt;/p&gt;
&lt;p&gt;For years, the solution was to bring in a third-party library like DOMPurify. But as of Firefox 148, the browser finally gives us a native, built-in solution: &lt;strong&gt;The Sanitizer API and &lt;code&gt;setHTML()&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Enter &lt;code&gt;setHTML()&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;The new &lt;code&gt;setHTML()&lt;/code&gt; method is essentially a drop-in replacement for &lt;code&gt;innerHTML&lt;/code&gt;, but with one important difference: it runs the HTML string through a native sanitizer &lt;em&gt;before&lt;/em&gt; it gets attached to the DOM.&lt;/p&gt;
&lt;p&gt;Instead of this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;el.innerHTML = dirtyString;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You now do this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;el.setHTML(dirtyString);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&amp;#39;s it. No matter what sanitizer config you pass, &lt;code&gt;setHTML()&lt;/code&gt; will &lt;em&gt;always&lt;/em&gt; strip out XSS-unsafe elements (&lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;iframe&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;frame&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;embed&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;object&amp;gt;&lt;/code&gt;, and SVG &lt;code&gt;&amp;lt;use&amp;gt;&lt;/code&gt;) and inline event handlers (like &lt;code&gt;onclick&lt;/code&gt; or &lt;code&gt;onerror&lt;/code&gt;). Even if you explicitly allow &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; in your config, it still gets removed. Safe by default, no exceptions.&lt;/p&gt;
&lt;p&gt;But the default sanitizer goes further than just blocking scripts. It also strips out elements like &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;style&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;form&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;input&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;video&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;template&amp;gt;&lt;/code&gt;, custom elements, &lt;code&gt;data-&lt;/code&gt; attributes, and more. Basically, if it&amp;#39;s not a simple content element (headings, paragraphs, lists, etc.), it&amp;#39;s gone.&lt;/p&gt;
&lt;h3&gt;How does it handle malicious input?&lt;/h3&gt;
&lt;p&gt;Let&amp;#39;s look at what happens when an attacker tries to inject something nasty:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const attackerPayload = `
  &amp;lt;h1&amp;gt;Hello!&amp;lt;/h1&amp;gt;
  &amp;lt;script&amp;gt;alert(&amp;#39;Stealing tokens...&amp;#39;);&amp;lt;/script&amp;gt;
  &amp;lt;img src=&amp;quot;cute-cat.jpg&amp;quot; onload=&amp;quot;sendData()&amp;quot;&amp;gt;
  &amp;lt;a href=&amp;quot;javascript:evil()&amp;quot;&amp;gt;Click me&amp;lt;/a&amp;gt;
`;

const container = document.getElementById(&amp;#39;content&amp;#39;);
container.setHTML(attackerPayload);

console.log(container.innerHTML);
// Output:
// &amp;lt;h1&amp;gt;Hello!&amp;lt;/h1&amp;gt;
// &amp;lt;a&amp;gt;Click me&amp;lt;/a&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice what happened:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag was stripped out entirely.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; was removed completely — the default sanitizer doesn&amp;#39;t allow it.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;href=&amp;quot;javascript:...&amp;quot;&lt;/code&gt; was stripped from the &lt;code&gt;&amp;lt;a&amp;gt;&lt;/code&gt; tag.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you want the most permissive mode — only strip XSS-unsafe elements, keep everything else — pass an empty config:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;container.setHTML(attackerPayload, { sanitizer: {} });
// Output:
// &amp;lt;h1&amp;gt;Hello!&amp;lt;/h1&amp;gt;
// &amp;lt;img src=&amp;quot;cute-cat.jpg&amp;quot;&amp;gt;
// &amp;lt;a&amp;gt;Click me&amp;lt;/a&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; stays, but the &lt;code&gt;onload&lt;/code&gt; handler and &lt;code&gt;javascript:&lt;/code&gt; URL are still gone. You can&amp;#39;t sneak XSS through &lt;code&gt;setHTML()&lt;/code&gt;, period.&lt;/p&gt;
&lt;p&gt;All of this happens natively inside the browser engine, without needing to ship a JavaScript sanitizer library to your users.&lt;/p&gt;
&lt;h2&gt;Can we customize the Sanitizer?&lt;/h2&gt;
&lt;p&gt;Yes! You can build your config two ways: as an &lt;em&gt;allow list&lt;/em&gt; (only these elements get through) or a &lt;em&gt;remove list&lt;/em&gt; (everything except these gets through):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Allow list: only allow these elements
const mySanitizer = new Sanitizer({
  elements: [&amp;#39;p&amp;#39;, &amp;#39;em&amp;#39;, &amp;#39;strong&amp;#39;, &amp;#39;a&amp;#39;],
  attributes: [&amp;#39;href&amp;#39;]
});

el.setHTML(dirtyString, { sanitizer: mySanitizer });

// Remove list: allow everything except these
const strictSanitizer = new Sanitizer({
  removeElements: [&amp;#39;img&amp;#39;, &amp;#39;table&amp;#39;, &amp;#39;style&amp;#39;]
});

el.setHTML(dirtyString, { sanitizer: strictSanitizer });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can also use the &lt;code&gt;Sanitizer&lt;/code&gt; object&amp;#39;s methods to build configs programmatically:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const sanitizer = new Sanitizer({});
sanitizer.allowElement(&amp;#39;p&amp;#39;);
sanitizer.allowElement(&amp;#39;em&amp;#39;);
sanitizer.allowAttribute(&amp;#39;href&amp;#39;);
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Why not just keep using DOMPurify?&lt;/h2&gt;
&lt;p&gt;DOMPurify works great and has been the go-to for years. But moving this responsibility to the browser has real advantages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Zero Bundle Size:&lt;/strong&gt; You ship less JavaScript.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance:&lt;/strong&gt; Native browser code is almost always faster than parsing and mutating the DOM via JavaScript.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Always Up-to-Date:&lt;/strong&gt; Browsers update automatically to patch new, obscure XSS vectors. You don&amp;#39;t have to worry about bumping your npm dependencies every time a new bypass is discovered.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context Awareness:&lt;/strong&gt; The browser inherently understands its own DOM parsing quirks better than a polyfill can.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Browser Support &amp;amp; The Path Forward&lt;/h2&gt;
&lt;p&gt;Right now, Firefox 148 is the first browser to ship this enabled by default (released February 24, 2026). Chrome has an implementation available in Canary behind a flag. Safari hasn&amp;#39;t started implementation yet, though the WebKit team has expressed a &lt;a href=&quot;https://github.com/WebKit/standards-positions/issues/86&quot;&gt;positive position&lt;/a&gt; on the spec.&lt;/p&gt;
&lt;p&gt;One thing worth noting: &lt;code&gt;setHTMLUnsafe()&lt;/code&gt; — the version that doesn&amp;#39;t enforce XSS-safety — has had &lt;a href=&quot;https://caniuse.com/mdn-api_element_sethtmlunsafe&quot;&gt;cross-browser support&lt;/a&gt; since 2024. So the unsafe counterpart is available now, and the safe version is catching up.&lt;/p&gt;
&lt;p&gt;So, what should you do today?&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re building modern web apps, start keeping an eye on your &lt;code&gt;innerHTML&lt;/code&gt; usage. You can&amp;#39;t safely switch 100% of your codebase to &lt;code&gt;setHTML()&lt;/code&gt; in production &lt;em&gt;just yet&lt;/em&gt;, but the days of blindly dumping strings into the DOM are numbered.&lt;/p&gt;
&lt;h2&gt;Bonus: Trusted Types and &lt;code&gt;Document.parseHTML()&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;Firefox 148 also ships two related features worth knowing about:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;Document.parseHTML()&lt;/code&gt;&lt;/strong&gt; is a companion method that parses an HTML string into a full &lt;code&gt;Document&lt;/code&gt; object (instead of injecting into an existing element). Same sanitization rules apply — XSS-unsafe content is always stripped.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trusted Types&lt;/strong&gt; is a separate API that lets you lock down &lt;em&gt;all&lt;/em&gt; dangerous sinks (&lt;code&gt;innerHTML&lt;/code&gt;, &lt;code&gt;outerHTML&lt;/code&gt;, &lt;code&gt;document.write&lt;/code&gt;, etc.) at the CSP level. You set a &lt;code&gt;Content-Security-Policy&lt;/code&gt; header:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Content-Security-Policy: require-trusted-types-for &amp;#39;script&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After that, passing a raw string to &lt;code&gt;innerHTML&lt;/code&gt; throws a &lt;code&gt;TypeError&lt;/code&gt;. You have to go through a policy function first. Combined with &lt;code&gt;setHTML()&lt;/code&gt;, you get defense in depth: Trusted Types forces developers through audited code paths, and &lt;code&gt;setHTML()&lt;/code&gt; makes sure the output is safe regardless.&lt;/p&gt;
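&lt;p&gt;As a sketch of what that policy function looks like (the policy name and the &lt;code&gt;mySanitize&lt;/code&gt; helper are placeholders I made up; in a real app the helper would be your audited sanitization routine):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Illustrative only: &amp;#39;app-policy&amp;#39; and mySanitize are placeholders.
// With the CSP header above in place, assigning a raw string to
// innerHTML throws a TypeError; strings must pass through a policy.
const policy = trustedTypes.createPolicy(&amp;#39;app-policy&amp;#39;, {
  createHTML: (input) =&gt; mySanitize(input), // your audited sanitizer
});

el.innerHTML = policy.createHTML(userBio); // allowed: went through the policy
el.setHTML(userBio);                       // also fine: sanitized natively
&lt;/code&gt;&lt;/pre&gt;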
&lt;p&gt;If you want to play with the API right now, Mozilla has a &lt;a href=&quot;https://sanitizer-api.dev/&quot;&gt;Sanitizer API playground&lt;/a&gt; where you can test different configs and see what gets stripped.&lt;/p&gt;
&lt;p&gt;Once the other browsers catch up, there won&amp;#39;t be a good reason to reach for &lt;code&gt;innerHTML&lt;/code&gt; again.&lt;/p&gt;
</content:encoded></item><item><title>Next Level Coding with Amp: Planning Before Prompting</title><link>https://rezhajul.io/posts/2026-next-level-coding-with-amp/</link><guid isPermaLink="true">https://rezhajul.io/posts/2026-next-level-coding-with-amp/</guid><description>My experience using Amp, the frontier coding agent. It rewards structured planning and works best as a &apos;second brain&apos; in your terminal.</description><pubDate>Wed, 25 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I recently started using Amp, and it feels different from any AI coding tool I&amp;#39;ve used before. It&amp;#39;s a coding agent that lives in your terminal (and editor), and the way you work with it matters more than anything it can autocomplete.&lt;/p&gt;
&lt;h2&gt;The real shift: planning vs. execution&lt;/h2&gt;
&lt;p&gt;The biggest realization I had while using Amp (and reading about others&amp;#39; workflows, like &lt;a href=&quot;https://boristane.com/blog/how-i-use-claude-code/&quot;&gt;Boris Tane&amp;#39;s&lt;/a&gt;) is this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Do not let the AI write code until you have a plan.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We often treat AI like a magic wand: throw a vague prompt at it and hope for the best. But that leads to hallucinations, wrong architecture choices, and wasted tokens.&lt;/p&gt;
&lt;p&gt;With Amp, the workflow shifts. You become a technical architect first, and a coder second.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Research&lt;/strong&gt;: I ask Amp to read my codebase deeply. It doesn&amp;#39;t skim; it understands the context.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Planning&lt;/strong&gt;: I ask it to write a &lt;code&gt;plan.md&lt;/code&gt;. I review the plan, tweak it, and correct assumptions &lt;em&gt;before&lt;/em&gt; a single line of code is written.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Execution&lt;/strong&gt;: Once the plan is solid, I tell Amp to implement it. And it just works.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Some prompts I actually use during planning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;Plan how to add a blogroll page to this Astro site, but don&amp;#39;t write code yet&amp;quot;&lt;/li&gt;
&lt;li&gt;&amp;quot;Read the codebase and explain how the content collections are structured&amp;quot;&lt;/li&gt;
&lt;li&gt;&amp;quot;Take a look at &lt;code&gt;git diff&lt;/code&gt; and analyze what changed&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The planning step feels slow at first, but it saves you from going back and forth fixing things the AI got wrong because it didn&amp;#39;t understand your codebase.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s a trick I picked up: do your planning and context gathering in one thread, then start a new thread for implementation. Amp lets you reference previous threads, so the implementation thread can pull in the plan without carrying all the back-and-forth exploration that led to it. This keeps the context window clean and focused. A bloated thread full of research tangents makes the agent worse at following through on the actual work.&lt;/p&gt;
&lt;h2&gt;Agent modes&lt;/h2&gt;
&lt;p&gt;Amp has three modes, and picking the right one matters more than you&amp;#39;d think:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;smart&lt;/strong&gt;: The default. Uses the best models without constraints. This is what I use for most work: planning, refactoring, writing new features.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;rush&lt;/strong&gt;: Faster and cheaper, but less capable. Good for small, well-defined tasks like &amp;quot;fix the TypeScript error in this file&amp;quot; or &amp;quot;rename this variable everywhere.&amp;quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;deep&lt;/strong&gt;: Deep reasoning with extended thinking. I reach for this when I&amp;#39;m stuck on a complex bug or need to think through architecture decisions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I usually start in &lt;code&gt;smart&lt;/code&gt; for the planning phase, switch to &lt;code&gt;rush&lt;/code&gt; for quick fixes, and pull out &lt;code&gt;deep&lt;/code&gt; when something is genuinely hard to reason about.&lt;/p&gt;
&lt;h2&gt;Teaching Amp your codebase with AGENTS.md&lt;/h2&gt;
&lt;p&gt;This is something I wish I&amp;#39;d set up earlier. Amp reads &lt;code&gt;AGENTS.md&lt;/code&gt; files in your project to understand your codebase conventions, build commands, and project structure. Think of it as onboarding docs for the AI.&lt;/p&gt;
&lt;p&gt;For this blog, my &lt;code&gt;AGENTS.md&lt;/code&gt; tells Amp things like: use Bun, not npm. Run &lt;code&gt;bun run build&lt;/code&gt; to verify. Posts go in &lt;code&gt;src/data/blog/&lt;/code&gt;. Notes use timestamp filenames. Tags are arrays. That kind of thing.&lt;/p&gt;
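&lt;p&gt;A condensed sketch of what that file might look like (paraphrasing the rules above; the real file has more detail):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# AGENTS.md

## Tooling
- Use Bun, not npm.
- Verify changes with `bun run build`.

## Content conventions
- Blog posts go in `src/data/blog/`.
- Notes use timestamp filenames.
- Frontmatter `tags` is an array, not a string.
&lt;/code&gt;&lt;/pre&gt;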
&lt;p&gt;You can also put &lt;code&gt;AGENTS.md&lt;/code&gt; files in subdirectories for more specific instructions. The agent picks them up automatically. Beyond that, Amp supports a &lt;code&gt;.agents/&lt;/code&gt; directory in your project where you can store custom tools, skills, and code review checks. It&amp;#39;s a single place to keep all your AI-related project config. Once you have this set up, you stop repeating yourself in every prompt.&lt;/p&gt;
&lt;h2&gt;Features that actually matter&lt;/h2&gt;
&lt;p&gt;A few tools make this workflow work well in practice:&lt;/p&gt;
&lt;h3&gt;The Oracle&lt;/h3&gt;
&lt;p&gt;Amp has a built-in &amp;quot;second brain&amp;quot; called Oracle (powered by GPT-5.2 with reasoning). I can ask the main agent to consult the Oracle for complex logic or architectural decisions. It&amp;#39;s like having a senior principal engineer on speed dial who doesn&amp;#39;t mind being bothered every 5 minutes.&lt;/p&gt;
&lt;p&gt;For example: &amp;quot;Ask the Oracle to review this API design and suggest improvements.&amp;quot; The Oracle looks at the code, reasons about it, and gives you a second opinion.&lt;/p&gt;
&lt;h3&gt;The Librarian&lt;/h3&gt;
&lt;p&gt;This one changed how I approach the research phase. The Librarian can search my entire codebase (and even external repos on GitHub) to find context. It doesn&amp;#39;t just look at the open file; it understands how everything connects. That makes the research phase way more useful than manually grepping around.&lt;/p&gt;
&lt;h3&gt;Subagents&lt;/h3&gt;
&lt;p&gt;For repetitive or parallel tasks, Amp can spawn subagents. They work independently and keep the main context clean. A prompt like &amp;quot;Convert these 5 CSS files to Tailwind, use one subagent per file&amp;quot; actually works. Each subagent handles one file without polluting the main conversation.&lt;/p&gt;
&lt;h3&gt;Code review&lt;/h3&gt;
&lt;p&gt;You can run &lt;code&gt;amp review&lt;/code&gt; in the CLI and it reviews your staged changes for bugs, security issues, and style problems. You can even define custom checks in &lt;code&gt;.agents/checks/&lt;/code&gt; to codify your team&amp;#39;s conventions. Things linters don&amp;#39;t catch, like &amp;quot;don&amp;#39;t use raw SQL in the API layer&amp;quot; or &amp;quot;every new endpoint needs rate limiting.&amp;quot;&lt;/p&gt;
&lt;h3&gt;Skills&lt;/h3&gt;
&lt;p&gt;Skills are reusable instruction packages that teach Amp how to do specific tasks. Amp ships with some built-in ones (like code review and commit message generation), but you can create your own. I have a custom skill for scaffolding new blog posts and another for proofreading drafts. You can also share skills across projects by putting them in &lt;code&gt;~/.config/amp/skills/&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;Thread sharing&lt;/h3&gt;
&lt;p&gt;Every conversation with Amp is a thread, and you can share them. If I figure out a tricky migration or debug a weird issue, I can share the thread URL with someone and they can see exactly what happened: prompts, tool calls, code changes, everything. You wouldn&amp;#39;t code without version control; same idea.&lt;/p&gt;
&lt;h2&gt;CLI piping (&lt;code&gt;amp -x&lt;/code&gt;)&lt;/h2&gt;
&lt;p&gt;This is for the power users. You can pipe terminal output directly into Amp for quick one-off tasks without starting a full session.&lt;/p&gt;
&lt;p&gt;Generating a commit message from staged changes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git diff --staged | amp -x &amp;quot;Write a concise commit message. Output ONLY raw text.&amp;quot; | git commit -F -
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finding specific files:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;amp -x &amp;quot;what files in this folder are markdown?&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Renaming files intelligently:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ls *.jpg | amp -x &amp;quot;Generate bash commands to rename these to photo_001.jpg, etc.&amp;quot; | bash
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These one-liners are surprisingly useful for scripting and automation.&lt;/p&gt;
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;Amp rewards structured thinking. If you treat it like a junior dev who needs clear instructions, it performs like a senior dev who never sleeps.&lt;/p&gt;
&lt;p&gt;Plan first, code second. The code is just the implementation detail.&lt;/p&gt;
</content:encoded></item><item><title>How I Built My Own AI News Anchor (Tech Watch)</title><link>https://rezhajul.io/posts/how-i-built-my-own-ai-news-anchor-tech-watch/</link><guid isPermaLink="true">https://rezhajul.io/posts/how-i-built-my-own-ai-news-anchor-tech-watch/</guid><description>I stopped doomscrolling and let an AI agent curate my tech news. Here&apos;s how I automated my information diet with OpenClaw and Node.js.</description><pubDate>Tue, 24 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I used to start my day by doomscrolling Twitter (or X, whatever), then Hacker News, then Lobsters, then maybe checking a few subreddits. By 10 AM, I was exhausted, annoyed, and full of &amp;quot;noise&amp;quot; but very little &amp;quot;signal.&amp;quot;&lt;/p&gt;
&lt;p&gt;The problem with the modern web isn&amp;#39;t a lack of information. It&amp;#39;s the volume. Algorithms are optimized for engagement, not for what I actually care about. I&amp;#39;d spend an hour scrolling, retain maybe two useful links, and come away irritated by some thread I never asked to see.&lt;/p&gt;
&lt;p&gt;So I decided to stop being my own news curator. I built an AI agent to do it for me.&lt;/p&gt;
&lt;h2&gt;What I wanted&lt;/h2&gt;
&lt;p&gt;I didn&amp;#39;t want a generic &amp;quot;Top 10 Tech News&amp;quot; list. I wanted a daily briefing that scans Hacker News and Lobsters, throws out the noise (political rage-bait, crypto scams, non-tech drama), and gives me short summaries of what actually matters. Delivered to Telegram, once a day, in a casual tone. That&amp;#39;s it.&lt;/p&gt;
&lt;p&gt;I named the agent Cici. She runs on OpenClaw with a simple Node.js script.&lt;/p&gt;
&lt;h2&gt;How it works&lt;/h2&gt;
&lt;p&gt;The whole thing is a &lt;code&gt;cron&lt;/code&gt; job and about 100 lines of JavaScript. No RAG pipeline, no vector database.&lt;/p&gt;
&lt;h3&gt;The fetcher (&lt;code&gt;techwatch.js&lt;/code&gt;)&lt;/h3&gt;
&lt;p&gt;A script hits the official APIs for Hacker News and Lobsters, grabs the top stories from the last 24 hours, and dumps them into a JSON file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Simplified logic
const hnStories = await fetchJson(&amp;#39;https://hacker-news.firebaseio.com/v0/topstories.json&amp;#39;);
const lobstersData = await fetchJson(&amp;#39;https://lobste.rs/hottest.json&amp;#39;);

// allStories is the merged list from both sources;
// filter it down to the last 24 hours (timestamps are Unix seconds)
const cutoff = Date.now() / 1000 - 24 * 60 * 60;
const recentStories = allStories.filter(story =&amp;gt; story.time &amp;gt; cutoff);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This gives me roughly 50 stories to work with.&lt;/p&gt;
&lt;h3&gt;The curator (the agent)&lt;/h3&gt;
&lt;p&gt;Instead of writing regex or heuristics to decide what&amp;#39;s interesting, I just hand the JSON to the LLM. My prompt looks something like:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Analyze these stories. Pick the top 5-10 that are relevant to a software engineer interested in AI, Linux, self-hosting, and open source. Avoid clickbait. Summarize them in a casual tone (Bahasa Indonesia/English mix).&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The LLM is genuinely good at this part. It can tell the difference between &amp;quot;just another JS framework&amp;quot; and &amp;quot;a critical OpenSSL vulnerability.&amp;quot; Regex could never do that.&lt;/p&gt;
&lt;h3&gt;The delivery (Telegram)&lt;/h3&gt;
&lt;p&gt;Once the summary is ready, the agent pushes it to my personal Telegram chat via the OpenClaw &lt;code&gt;message&lt;/code&gt; tool:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Cici: &amp;quot;Morning Ko! Here&amp;#39;s your tech briefing. The &lt;code&gt;xz&lt;/code&gt; utils backdoor is scary, but check out this cool new Rust CLI tool...&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;What it actually feels like to use&lt;/h2&gt;
&lt;p&gt;The first week, I kept opening Twitter out of habit. I&amp;#39;d catch myself mid-scroll, remember the briefing was already in Telegram, and close the app. After about two weeks the habit broke. Now I wake up, check Telegram, read Cici&amp;#39;s summary over coffee, and that&amp;#39;s my tech news for the day.&lt;/p&gt;
&lt;p&gt;Some mornings the briefing is five items. Some mornings it&amp;#39;s eight. Occasionally Cici picks something I wouldn&amp;#39;t have found on my own, like a blog post from a small open-source project I&amp;#39;d never heard of, or a discussion thread on Lobsters about NixOS packaging that turned out to be exactly what I needed for a weekend project.&lt;/p&gt;
&lt;p&gt;It&amp;#39;s not perfect. Sometimes the summaries are a bit shallow, and I have to click through to the original link anyway. Once or twice the LLM included something clearly off-topic (a cryptocurrency governance post somehow got past the filter). But the hit rate is honestly better than my own manual curation was. I was skimming headlines and missing things. Cici actually reads the content.&lt;/p&gt;
&lt;p&gt;The part I didn&amp;#39;t expect: it&amp;#39;s less stressful. I don&amp;#39;t see the outrage threads. I don&amp;#39;t get sucked into comment sections. I read the summary, click the two or three links that interest me, and move on. I&amp;#39;m saving maybe 30 to 60 minutes every morning, but the bigger win is the mental clarity. I start work focused instead of irritated.&lt;/p&gt;
&lt;h2&gt;What&amp;#39;s next&lt;/h2&gt;
&lt;p&gt;I&amp;#39;m thinking about expanding this to a few more areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Auto-summarizing long YouTube tech talks I keep bookmarking but never watch.&lt;/li&gt;
&lt;li&gt;Watching specific GitHub repos for new releases and summarizing the actual changelog, not just &amp;quot;bug fixes and improvements.&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you&amp;#39;re drowning in feeds and tabs, try building something like this. A Saturday morning project, start to finish.&lt;/p&gt;
</content:encoded></item><item><title>Building a Telegram bot to babysit 24,000 developers</title><link>https://rezhajul.io/posts/building-pythonid-bot-telegram-group-management/</link><guid isPermaLink="true">https://rezhajul.io/posts/building-pythonid-bot-telegram-group-management/</guid><description>How I built PythonID-bot to manage the Indonesian Python community on Telegram.</description><pubDate>Mon, 23 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The PythonID Telegram group has around 24,000 members. Indonesian developers chatting about Python, Django, FastAPI, job openings, and occasionally, cryptocurrency scams disguised as job offers.&lt;/p&gt;
&lt;p&gt;Moderation was manual. Admins would spot a spammer, ban them, delete the messages, and then do it again twenty minutes later. Bots existed but they were either too dumb (kick anyone without a profile photo) or too complex (require a PhD in YAML to configure). So I wrote my own.&lt;/p&gt;
&lt;h2&gt;What PythonID-bot actually does&lt;/h2&gt;
&lt;p&gt;The bot watches every message in the group. When a new user joins, it checks two things: do they have a profile photo, and do they have a username set. If not, they get a warning posted to a dedicated topic thread. If they ignore the warning for three hours, they get muted.&lt;/p&gt;
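&lt;p&gt;The decision logic is tiny. Here&amp;#39;s a sketch in plain Python (not the bot&amp;#39;s actual code; the names are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;WARNING_THRESHOLD_SECONDS = 3 * 60 * 60  # three hours

def needs_warning(has_photo, has_username):
    # A new member is flagged if either signal is missing.
    return not (has_photo and has_username)

def should_mute(warned_at, now):
    # Mute once the warning has been ignored past the threshold.
    return now - warned_at &amp;gt;= WARNING_THRESHOLD_SECONDS
&lt;/code&gt;&lt;/pre&gt;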
&lt;p&gt;That alone cut the spam by maybe 60%. Turns out most spam accounts don&amp;#39;t bother setting a profile picture.&lt;/p&gt;
&lt;p&gt;But 60% is not enough when you&amp;#39;re getting hit with crypto and gambling spam daily.&lt;/p&gt;
&lt;h3&gt;Captcha&lt;/h3&gt;
&lt;p&gt;New members can optionally be required to solve a button-based captcha within 60 seconds. It&amp;#39;s not a hard captcha. Press the right button. But it stops the bots that just join, dump a message, and leave.&lt;/p&gt;
&lt;p&gt;The captcha system also survives bot restarts. If the bot goes down while someone has a pending captcha, it recovers all pending challenges on startup and either re-schedules the timeout or expires them if time already ran out. I didn&amp;#39;t think about this until the bot crashed during a spam wave and five people were stuck in limbo.&lt;/p&gt;
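&lt;p&gt;The recovery boils down to one decision per pending challenge. Roughly (hypothetical names, not the real implementation):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;CAPTCHA_TIMEOUT_SECONDS = 60

def recover_challenge(issued_at, now):
    # On startup, decide what to do with a challenge that survived a
    # restart: expire it if its window already passed, otherwise
    # re-schedule the remaining time.
    remaining = issued_at + CAPTCHA_TIMEOUT_SECONDS - now
    if remaining &amp;lt;= 0:
        return (&amp;quot;expire&amp;quot;, 0)
    return (&amp;quot;reschedule&amp;quot;, remaining)
&lt;/code&gt;&lt;/pre&gt;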
&lt;h3&gt;Probation&lt;/h3&gt;
&lt;p&gt;Even after passing captcha, new users can&amp;#39;t send links, forwarded messages, or external quotes for three days. Some domains are whitelisted (docs.python.org, github.com, stackoverflow.com) because telling a new Python developer they can&amp;#39;t share a docs link would be absurd.&lt;/p&gt;
&lt;p&gt;The whitelist also includes about 150 Indonesian Telegram community groups so new users can share links to other local tech communities. First violation gets a warning. Third violation gets a mute. The message gets deleted either way.&lt;/p&gt;
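&lt;p&gt;The link check compares hostnames, not raw strings. A sketch (whitelist truncated, helper name hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from urllib.parse import urlparse

WHITELIST = {&amp;quot;docs.python.org&amp;quot;, &amp;quot;github.com&amp;quot;, &amp;quot;stackoverflow.com&amp;quot;}

def is_allowed_link(url):
    # Compare the parsed hostname, not the raw string, so
    # https://github.com.evil.example does not sneak through.
    host = (urlparse(url).hostname or &amp;quot;&amp;quot;).lower()
    return host in WHITELIST
&lt;/code&gt;&lt;/pre&gt;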
&lt;h3&gt;DM unrestriction&lt;/h3&gt;
&lt;p&gt;This is the part I&amp;#39;m most pleased with. When someone gets muted, they get a link to DM the bot. The bot checks their profile, and if they&amp;#39;ve fixed whatever was wrong (added a photo, set a username), it automatically lifts the restriction and posts a note to the admin topic.&lt;/p&gt;
&lt;p&gt;Before this existed, restricted users had to message an admin and wait for someone to manually check and unrestrict them. Sometimes that took hours.&lt;/p&gt;
&lt;h2&gt;The multi-group problem&lt;/h2&gt;
&lt;p&gt;The bot started as a single-group thing. One &lt;code&gt;.env&lt;/code&gt; file, one &lt;code&gt;group_id&lt;/code&gt;, done. Then other Indonesian developer communities asked if they could use it too.&lt;/p&gt;
&lt;p&gt;Supporting multiple groups meant rethinking most of the architecture. Hendy Santika contributed the initial multi-group refactor, introducing a &lt;code&gt;GroupConfig&lt;/code&gt; Pydantic model and a &lt;code&gt;GroupRegistry&lt;/code&gt; that stores per-group settings. Each group gets its own warning topic, captcha toggle, and probation duration.&lt;/p&gt;
&lt;p&gt;Configuration moved from &lt;code&gt;.env&lt;/code&gt; to a &lt;code&gt;groups.json&lt;/code&gt; file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[
  {
    &amp;quot;group_id&amp;quot;: -1001234567890,
    &amp;quot;warning_topic_id&amp;quot;: 123,
    &amp;quot;captcha_enabled&amp;quot;: true,
    &amp;quot;warning_time_threshold_minutes&amp;quot;: 180
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;.env&lt;/code&gt; fallback still works for single-group setups. I didn&amp;#39;t want to break the simple case.&lt;/p&gt;
&lt;h3&gt;Everything that broke&lt;/h3&gt;
&lt;p&gt;The multi-group refactor surfaced problems I hadn&amp;#39;t thought about.&lt;/p&gt;
&lt;p&gt;The captcha callback data was &lt;code&gt;captcha_verify:{user_id}&lt;/code&gt;. If a user joined two groups at the same time, the bot couldn&amp;#39;t tell which group the captcha was for. Fixed it to &lt;code&gt;captcha_verify_{group_id}_{user_id}&lt;/code&gt;.&lt;/p&gt;
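&lt;p&gt;Group IDs are negative numbers with no underscores in them, so splitting the new format from the right is unambiguous. A sketch of the round-trip (helper names are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def make_callback_data(group_id, user_id):
    return f&amp;quot;captcha_verify_{group_id}_{user_id}&amp;quot;

def parse_callback_data(data):
    # rsplit from the right: the prefix itself contains underscores,
    # but the two trailing fields never do.
    prefix, group_id, user_id = data.rsplit(&amp;quot;_&amp;quot;, 2)
    return int(group_id), int(user_id)
&lt;/code&gt;&lt;/pre&gt;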
&lt;p&gt;The scheduler that auto-restricts users ran in a loop over all groups. If the bot got kicked from one group, the entire loop crashed and no other group got processed. Wrapping each group&amp;#39;s API call in a try/except fixed that.&lt;/p&gt;
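&lt;p&gt;The pattern is the usual isolate-each-iteration loop. A sketch (not the actual scheduler code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def process_all_groups(groups, process_one, log):
    # One failing group (say, the bot got kicked from it) must not
    # starve the rest of the loop.
    failures = []
    for group_id in groups:
        try:
            process_one(group_id)
        except Exception as exc:
            log(group_id, exc)
            failures.append(group_id)
    return failures
&lt;/code&gt;&lt;/pre&gt;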
&lt;p&gt;The DM unrestriction flow had a similar issue. A restricted user messages the bot privately, the bot checks their profile and lifts the restriction. But with multiple groups, it needed to check and unrestrict across all monitored groups, and handle cases where it no longer has access to some of them.&lt;/p&gt;
&lt;p&gt;I caught all of these during code review on the pull requests, before they hit production. They all looked obvious once I spotted them.&lt;/p&gt;
&lt;h2&gt;The Markdown incident&lt;/h2&gt;
&lt;p&gt;Telegram&amp;#39;s Markdown v1 parser is fragile. I spent an embarrassing amount of time debugging why some warning messages showed up as raw text instead of formatted messages.&lt;/p&gt;
&lt;p&gt;Two causes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Parentheses inside link text break the parser. &lt;code&gt;[Contact Bot (for help)](url)&lt;/code&gt; does not work. Moving the parenthesized text outside the brackets fixed it.&lt;/li&gt;
&lt;li&gt;Usernames with underscores (&lt;code&gt;@Sharo_Kenne&lt;/code&gt;) get interpreted as italic markers. If the underscore doesn&amp;#39;t have a matching close, the entire message falls back to raw text.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The fix was calling &lt;code&gt;escape_markdown(username, version=1)&lt;/code&gt; for every username mention. A one-liner that took three hours to figure out.&lt;/p&gt;
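&lt;p&gt;&lt;code&gt;python-telegram-bot&lt;/code&gt; ships the real helper, but the v1 rule is small enough to sketch. Roughly what it has to do (simplified; treat this as an illustration, not the library&amp;#39;s exact code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def escape_markdown_v1(text):
    # Telegram Markdown v1 treats these as formatting markers; an
    # unmatched one makes the whole message fall back to raw text.
    for ch in (&amp;quot;_&amp;quot;, &amp;quot;*&amp;quot;, &amp;quot;`&amp;quot;, &amp;quot;[&amp;quot;):
        text = text.replace(ch, &amp;quot;\\&amp;quot; + ch)
    return text
&lt;/code&gt;&lt;/pre&gt;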
&lt;h2&gt;The stack&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Python 3.11+ with &lt;code&gt;python-telegram-bot&lt;/code&gt; v20&lt;/li&gt;
&lt;li&gt;SQLite with WAL mode for the database (SQLModel/SQLAlchemy)&lt;/li&gt;
&lt;li&gt;Pydantic for configuration validation&lt;/li&gt;
&lt;li&gt;Logfire for structured logging&lt;/li&gt;
&lt;li&gt;&lt;code&gt;uv&lt;/code&gt; as the package manager&lt;/li&gt;
&lt;li&gt;Docker and Docker Compose for deployment&lt;/li&gt;
&lt;li&gt;442 tests, 99% coverage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The startup validates configuration (group IDs must be negative, timeouts must be within sane ranges) and the database sets WAL mode and &lt;code&gt;synchronous=NORMAL&lt;/code&gt; on init. These things sound boring but they prevent the kind of bugs that only show up at 2 AM on a Saturday.&lt;/p&gt;
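&lt;p&gt;The guards themselves are a handful of lines. A sketch without Pydantic (the exact bounds here are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def validate_group_config(group_id, warning_threshold_minutes):
    # Telegram supergroup IDs are negative; a positive ID means a typo
    # that would otherwise fail silently at runtime.
    if group_id &amp;gt;= 0:
        raise ValueError(&amp;quot;group_id must be negative&amp;quot;)
    # Keep timeouts in a sane range: at least a minute, at most a week.
    if not (1 &amp;lt;= warning_threshold_minutes &amp;lt;= 7 * 24 * 60):
        raise ValueError(&amp;quot;warning threshold out of range&amp;quot;)
&lt;/code&gt;&lt;/pre&gt;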
&lt;h2&gt;What I&amp;#39;d do differently&lt;/h2&gt;
&lt;p&gt;I&amp;#39;d start with multi-group support from day one. Retrofitting it was the hardest part of the project. The single-group assumption was baked into every handler, every database query, every scheduler job. Pulling it out was like removing a load-bearing wall.&lt;/p&gt;
&lt;p&gt;I&amp;#39;d also write the DM unrestriction flow earlier. It reduced admin workload more than any other feature. People fix their profiles quickly when they know there&amp;#39;s an automated way back in.&lt;/p&gt;
&lt;p&gt;I run it for PythonID on my VPS alongside Miniflux, Mastodon, and a handful of other self-hosted services. JVM Indonesia, IDDevOps, and KotlinID run their own instances. Other communities picked it up and deployed it themselves without me having to do anything, which is exactly how open source should work.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/rezhajulio/PythonID-bot&quot;&gt;source code&lt;/a&gt; is on GitHub if you want to look at it or run your own instance.&lt;/p&gt;
</content:encoded></item><item><title>Forget Notion. I Manage My House in the Terminal Now.</title><link>https://rezhajul.io/posts/forget-notion-manage-house-in-terminal/</link><guid isPermaLink="true">https://rezhajul.io/posts/forget-notion-manage-house-in-terminal/</guid><description>Why I switched to Micasa, a TUI tool for home management, and why local-first software is the future.</description><pubDate>Sun, 22 Feb 2026 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Let&amp;#39;s be honest. Most &amp;quot;productivity&amp;quot; apps today are just procrastination tools in disguise.&lt;/p&gt;
&lt;p&gt;We spend more time setting up dashboards, picking tag colors, and waiting for loading spinners than actually doing the work. Especially for household chores. Tracking AC maintenance, inventory, grocery lists. Do you really need to wait for a massive React component to render just to write down &amp;quot;buy lightbulbs&amp;quot;?&lt;/p&gt;
&lt;p&gt;That&amp;#39;s why when I saw &lt;a href=&quot;https://micasa.dev&quot;&gt;Micasa&lt;/a&gt; popping up in my feed today, I knew I had to try it.&lt;/p&gt;
&lt;h2&gt;What is Micasa?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/cpcloud/micasa&quot;&gt;Micasa&lt;/a&gt; is a terminal application for managing your home. Not a to-do list with a house emoji slapped on it. It actually tracks the things homeowners deal with: maintenance schedules, appliances, vendors, projects, incidents, quotes, and documents. All from the command line.&lt;/p&gt;
&lt;p&gt;The creator built it because his home maintenance system was, in his words, &amp;quot;a shoebox of receipts and the vague feeling I was supposed to call someone about the roof.&amp;quot; I felt that in my bones.&lt;/p&gt;
&lt;p&gt;First impression? Fast. Genuinely, annoyingly fast. Written in Go, installed with &lt;code&gt;go install&lt;/code&gt; or a single binary. No ads, no tracking, no &amp;quot;Sign up with Google&amp;quot;. Just a text-based interface that responds before my fingers leave the keys.&lt;/p&gt;
&lt;h2&gt;What it actually tracks&lt;/h2&gt;
&lt;p&gt;This is where Micasa surprised me. It&amp;#39;s not just a list app. It has actual structure for household things that I used to scatter across Notion pages, WhatsApp reminders, and sticky notes on the fridge.&lt;/p&gt;
&lt;p&gt;Maintenance schedules with auto-computed due dates. So when I log that I changed the AC filter, it tells me when the next one is due. No more guessing &amp;quot;was it three months ago or six?&amp;quot;&lt;/p&gt;
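&lt;p&gt;Micasa itself is written in Go, but the idea is simple enough to sketch in a few lines of Python (the interval and dates here are invented):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from datetime import date, timedelta

def next_due(last_done, interval_days):
    # Next maintenance is simply the last completion plus the interval.
    return last_done + timedelta(days=interval_days)
&lt;/code&gt;&lt;/pre&gt;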
&lt;p&gt;Appliance tracking with purchase dates and warranty status. My dishwasher&amp;#39;s warranty card? It&amp;#39;s in the SQLite file now, not in a drawer I&amp;#39;ll never open again. You can attach files directly to records: manuals, invoices, photos, all stored in the same database.&lt;/p&gt;
&lt;p&gt;A vendor directory that remembers who did what. Last time my AC needed servicing, I spent twenty minutes scrolling through WhatsApp to find the technician&amp;#39;s number. I&amp;#39;ve also called people to clean my grease trap several times over the last few years, and every single time I had to dig through old chats to find who I used last. Micasa keeps vendor contacts linked to every job they&amp;#39;ve done.&lt;/p&gt;
&lt;p&gt;Project tracking for those &amp;quot;someday&amp;quot; home improvements. From &amp;quot;napkin sketch to completion, or graceful abandonment,&amp;quot; as the docs put it. You can compare quotes side by side and see actual costs.&lt;/p&gt;
&lt;p&gt;Incident logging for when things break. My sink tap broke once and I had no idea who to call or whether it was still under the building&amp;#39;s warranty. With Micasa you log incidents with severity and location, link them to appliances and vendors, and mark them resolved when fixed.&lt;/p&gt;
&lt;p&gt;The interface uses vim-style modal keys, inspired by &lt;a href=&quot;https://www.visidata.org/&quot;&gt;VisiData&lt;/a&gt;. You can sort by any column, fuzzy-search to jump between fields, and hide columns you don&amp;#39;t care about. If you&amp;#39;ve used VisiData before, you&amp;#39;ll feel right at home. If you haven&amp;#39;t, run &lt;code&gt;micasa --demo&lt;/code&gt; to poke around with sample data before committing your own house to it.&lt;/p&gt;
&lt;h2&gt;Why TUI in 2026?&lt;/h2&gt;
&lt;p&gt;You might ask, &lt;em&gt;&amp;quot;Why use a black and white screen in this day and age? Doesn&amp;#39;t it hurt your eyes?&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Quite the opposite. When every web app is trying to be a heavy &amp;quot;Super App,&amp;quot; going back to TUI feels like drinking ice water on a scorching day.&lt;/p&gt;
&lt;p&gt;Your hands never leave the keyboard. &lt;code&gt;j&lt;/code&gt;, &lt;code&gt;k&lt;/code&gt;, &lt;code&gt;enter&lt;/code&gt; are faster than aiming a mouse at tiny buttons. There&amp;#39;s nothing on screen except the thing you&amp;#39;re actually trying to do. And it runs on anything, even a ten-year-old laptop. No need for 16GB RAM just to open a to-do list (looking at you, Electron apps).&lt;/p&gt;
&lt;h2&gt;Local-first or nothing&lt;/h2&gt;
&lt;p&gt;The thing that sold me on Micasa is that data stays on my machine. SQLite file, local disk. That&amp;#39;s it.&lt;/p&gt;
&lt;p&gt;I&amp;#39;m tired of the &amp;quot;SaaS Fatigue&amp;quot; cycle:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Find a great tool.&lt;/li&gt;
&lt;li&gt;Upload my life&amp;#39;s data to it.&lt;/li&gt;
&lt;li&gt;A year later, the startup goes bust or pivots to an &amp;quot;AI-Powered Enterprise Solution,&amp;quot; slashing free features.&lt;/li&gt;
&lt;li&gt;Data is lost or hard to export.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;With Micasa, none of that matters. Internet down? Developer retired? The binary still runs on my machine. I can back up the database myself via cron job, rsync, or Syncthing.&lt;/p&gt;
&lt;h2&gt;Try it&lt;/h2&gt;
&lt;p&gt;If you&amp;#39;re the type who feels opening a browser is &amp;quot;heavy&amp;quot; and prefers living inside a Tmux session, Micasa is worth a try.&lt;/p&gt;
&lt;p&gt;For me, the terminal is home. And now, my home is in the terminal.&lt;/p&gt;
</content:encoded></item><item><title>Upgrading four PostgreSQL instances to 18.2 in Docker</title><link>https://rezhajul.io/posts/upgrading-postgresql-18-docker/</link><guid isPermaLink="true">https://rezhajul.io/posts/upgrading-postgresql-18-docker/</guid><description>I upgraded Miniflux, Invidious, a Django app, and Mastodon from PostgreSQL 15/17 to 18.2. Every single one broke in a different way.</description><pubDate>Sat, 21 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I run a handful of self-hosted services on my server. Miniflux for RSS, Invidious for YouTube, a Django app, and Mastodon. They all use PostgreSQL, and they were all stuck on older versions -- 15 or 17, depending on when I last felt brave.&lt;/p&gt;
&lt;p&gt;PostgreSQL 18.2 came out and I figured it was time. Four databases, four upgrades, one evening. How hard could it be.&lt;/p&gt;
&lt;p&gt;It took two evenings.&lt;/p&gt;
&lt;h2&gt;The general pattern&lt;/h2&gt;
&lt;p&gt;Every upgrade followed the same rough steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Stop the containers&lt;/li&gt;
&lt;li&gt;Copy the data directory as a backup&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;tianon/postgres-upgrade&lt;/code&gt; to do the actual &lt;code&gt;pg_upgrade&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Fix the directory layout (more on this later)&lt;/li&gt;
&lt;li&gt;Fix &lt;code&gt;pg_hba.conf&lt;/code&gt; (more on this too)&lt;/li&gt;
&lt;li&gt;Update &lt;code&gt;docker-compose.yml&lt;/code&gt; to use &lt;code&gt;postgres:18&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Start everything up and hold your breath&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here&amp;#39;s what the backup looked like for every service:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose stop
cp -r ./pgdata ./pgdata-bak
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the upgrade command, using Miniflux as an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run --rm \
  -v /root/Docker/miniflux/pg17:/var/lib/postgresql/17/docker \
  -v /root/Docker/miniflux/pg18:/var/lib/postgresql/18/docker \
  -e PGDATAOLD=/var/lib/postgresql/17/docker \
  -e PGDATANEW=/var/lib/postgresql/18/docker \
  -e POSTGRES_INITDB_ARGS=&amp;quot;--no-data-checksums&amp;quot; \
  tianon/postgres-upgrade:17-to-18
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Simple enough. Except every instance found a new way to fail.&lt;/p&gt;
&lt;h2&gt;The data checksums problem&lt;/h2&gt;
&lt;p&gt;The first thing that blew up was Miniflux. The upgrade container initialized the new cluster with checksums enabled by default, but my old cluster had them off. The error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;old cluster does not use data checksums but the new one does
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The fix is passing &lt;code&gt;--no-data-checksums&lt;/code&gt; in &lt;code&gt;POSTGRES_INITDB_ARGS&lt;/code&gt;. I hit this on every single upgrade because none of my old clusters used checksums. You would think I&amp;#39;d remember after the first time.&lt;/p&gt;
&lt;h2&gt;PostgreSQL 18 changed its directory layout&lt;/h2&gt;
&lt;p&gt;This was the one that ate the most time. PostgreSQL 18&amp;#39;s Docker image expects data in a version-specific subdirectory: &lt;code&gt;/var/lib/postgresql/18/docker&lt;/code&gt;. Previous versions just used &lt;code&gt;/var/lib/postgresql/data&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;tianon/postgres-upgrade&lt;/code&gt; image does the upgrade, but the output directory doesn&amp;#39;t always match what the official &lt;code&gt;postgres:18&lt;/code&gt; image expects. So after the upgrade, you need to shuffle directories around:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mkdir -p ./pgdata/18
mv ./pgdata/docker ./pgdata/18/docker
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then update your volume mount from &lt;code&gt;./pgdata:/var/lib/postgresql/data&lt;/code&gt; to &lt;code&gt;./pgdata:/var/lib/postgresql&lt;/code&gt;. Mount the parent, let the image find the subdirectory itself.&lt;/p&gt;
&lt;p&gt;I got this wrong at least twice. The container starts, PostgreSQL sees an empty data directory, initializes a fresh cluster, and your data is just sitting in the wrong folder while a brand new empty database happily accepts connections. Terrifying if you don&amp;#39;t realize what happened.&lt;/p&gt;
&lt;h2&gt;pg_hba.conf resets every time&lt;/h2&gt;
&lt;p&gt;The upgrade process runs &lt;code&gt;initdb&lt;/code&gt;, which generates a fresh &lt;code&gt;pg_hba.conf&lt;/code&gt;. The fresh config only allows connections from &lt;code&gt;127.0.0.1/32&lt;/code&gt;. In Docker, your app container is on a different IP. So the app can&amp;#39;t connect.&lt;/p&gt;
&lt;p&gt;After every upgrade I had to edit &lt;code&gt;pg_hba.conf&lt;/code&gt; to allow connections from the Docker network. Docker&amp;#39;s default address pools fall inside the private &lt;code&gt;172.16.0.0/12&lt;/code&gt; range (the default bridge, for instance, is &lt;code&gt;172.17.0.0/16&lt;/code&gt;), so scope the rule to that range instead of opening it to the world:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;host    all    all    172.16.0.0/12    scram-sha-256
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you&amp;#39;re using a custom Docker network with a different subnet, check it with &lt;code&gt;docker network inspect &amp;lt;network_name&amp;gt;&lt;/code&gt; and use that CIDR instead. Don&amp;#39;t use &lt;code&gt;0.0.0.0/0&lt;/code&gt; with &lt;code&gt;trust&lt;/code&gt; -- even on an internal network, there&amp;#39;s no reason to skip authentication entirely.&lt;/p&gt;
&lt;h2&gt;Invidious and the custom superuser&lt;/h2&gt;
&lt;p&gt;Invidious uses a custom PostgreSQL superuser instead of the default &lt;code&gt;postgres&lt;/code&gt;. The upgrade tool assumes &lt;code&gt;postgres&lt;/code&gt; unless you tell it otherwise, so the first attempt died with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;FATAL: role &amp;quot;postgres&amp;quot; does not exist
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The fix was passing the username everywhere:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run --rm \
  -v /root/Docker/invidious/postgresdata:/var/lib/postgresql/17/data \
  -v /root/Docker/invidious/pgdata:/var/lib/postgresql/18 \
  -e POSTGRES_INITDB_ARGS=&amp;quot;--no-data-checksums --username=myuser&amp;quot; \
  -e PGUSER=myuser \
  tianon/postgres-upgrade:17-to-18 \
  --username=myuser
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Three separate places to specify the username. Miss any one of them and it fails with a different error each time.&lt;/p&gt;
&lt;h2&gt;The Django app: jumping from 15 to 18&lt;/h2&gt;
&lt;p&gt;The Django app was still on PostgreSQL 15. I was worried about skipping versions, but &lt;code&gt;tianon&lt;/code&gt; provides a &lt;code&gt;15-to-18&lt;/code&gt; upgrade image and &lt;code&gt;pg_upgrade&lt;/code&gt; handles major version jumps fine. Django 4.2 supports PostgreSQL 12 and above, so 18 was within the compatibility window.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run --rm \
  -v /root/Docker/myapp/postgres:/var/lib/postgresql/15/data \
  -v /root/Docker/myapp/pgdata:/var/lib/postgresql/18 \
  -e POSTGRES_INITDB_ARGS=&amp;quot;--no-data-checksums&amp;quot; \
  -e PGUSER=postgres \
  tianon/postgres-upgrade:15-to-18 \
  --username=postgres
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Same directory restructuring, same &lt;code&gt;pg_hba.conf&lt;/code&gt; dance. At least by this point I had a routine.&lt;/p&gt;
&lt;h2&gt;Mastodon&lt;/h2&gt;
&lt;p&gt;Mastodon v4.5.x requires PostgreSQL 14 or newer, so 18 is well within range. The upgrade itself was identical to Miniflux -- same version jump (17 to 18), same checksums issue, same directory fix.&lt;/p&gt;
&lt;p&gt;The only thing that made me nervous was the database size. Mastodon accumulates a lot of data. But &lt;code&gt;pg_upgrade&lt;/code&gt; can use hard links (&lt;code&gt;--link&lt;/code&gt;) instead of copying files, so even large databases upgrade quickly without doubling disk usage.&lt;/p&gt;
&lt;h2&gt;Post-upgrade cleanup&lt;/h2&gt;
&lt;p&gt;After every upgrade, PostgreSQL complained about collation version mismatches:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;WARNING: database &amp;quot;invidious&amp;quot; has a collation version mismatch
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The operating system in the new container ships a newer glibc with updated collation definitions. The fix:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;ALTER DATABASE invidious REFRESH COLLATION VERSION;
ALTER DATABASE postgres REFRESH COLLATION VERSION;
ALTER DATABASE template1 REFRESH COLLATION VERSION;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then rebuild the statistics so the query planner isn&amp;#39;t working with stale data:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker compose exec db vacuumdb -U postgres --all --analyze-in-stages --missing-stats-only
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Updated docker-compose.yml&lt;/h2&gt;
&lt;p&gt;The final compose config for each service looked roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;services:
  db:
    image: postgres:18
    volumes:
      - ./pgdata:/var/lib/postgresql
      - /etc/localtime:/etc/localtime:ro
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The key change is the volume mount. Mount the parent directory, not the data directory directly.&lt;/p&gt;
&lt;h2&gt;What I&amp;#39;d do differently&lt;/h2&gt;
&lt;p&gt;Write a script. I did four upgrades manually and made the same mistakes repeatedly. The steps are mechanical: stop, backup, run the upgrade image, restructure directories, fix &lt;code&gt;pg_hba.conf&lt;/code&gt;, update compose, start. A bash script with the service name and source version as arguments would&amp;#39;ve saved me an evening.&lt;/p&gt;
&lt;p&gt;I&amp;#39;d also check the checksum setting beforehand. You can see it with &lt;code&gt;SHOW data_checksums;&lt;/code&gt; inside the running database, or &lt;code&gt;pg_controldata&lt;/code&gt; on the data directory. Knowing upfront avoids the &amp;quot;oh right, checksums&amp;quot; moment on every single upgrade.&lt;/p&gt;
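&lt;p&gt;Both checks take a few seconds (&lt;code&gt;db&lt;/code&gt; is the compose service name from above; the data path shown is the pre-18 image default):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Ask the running server
docker compose exec db psql -U postgres -c &amp;#39;SHOW data_checksums;&amp;#39;

# Or read the control file directly
docker compose exec db pg_controldata /var/lib/postgresql/data | grep -i checksum
&lt;/code&gt;&lt;/pre&gt;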
&lt;p&gt;All four services are running on 18.2 now. The databases are fine. The backups are still sitting there, in directories named &lt;code&gt;pg17-backup&lt;/code&gt; and &lt;code&gt;postgres-bak&lt;/code&gt;, and I will definitely forget to clean them up.&lt;/p&gt;
</content:encoded></item><item><title>4 Root Causes Hiding Behind One Airflow Error</title><link>https://rezhajul.io/posts/debugging-airflow-executor-reports-failed/</link><guid isPermaLink="true">https://rezhajul.io/posts/debugging-airflow-executor-reports-failed/</guid><description>A debugging session that started with one Airflow error and uncovered four cascading failures across Docker images, Kubernetes caching, namespace boundaries, and Python string iteration.</description><pubDate>Fri, 20 Feb 2026 05:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Last week I burned most of a day chasing a single Airflow error that turned out to be four problems stacked on top of each other. Each fix revealed the next failure, like one of those Russian nesting dolls except each one is angrier than the last.&lt;/p&gt;
&lt;p&gt;The setup: Apache Airflow running on Kubernetes with the KubernetesExecutor. DAGs mounted via a shared volume. Worker pods spin up per task using a Docker image we build in CI.&lt;/p&gt;
&lt;h2&gt;The symptom&lt;/h2&gt;
&lt;p&gt;Tasks that should be running were showing up as &lt;code&gt;queued&lt;/code&gt; in the Airflow UI, then immediately flipping to &lt;code&gt;failed&lt;/code&gt;. The scheduler logs had this gem:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Executor reports task instance finished (failed) although the task says it&amp;#39;s queued.
Was the task killed externally?
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No, nothing was killed externally. But okay, let&amp;#39;s figure out what&amp;#39;s going on.&lt;/p&gt;
&lt;h2&gt;Root cause 1: the Docker image wasn&amp;#39;t rebuilt&lt;/h2&gt;
&lt;p&gt;I started by checking the worker pod logs. The pods were crashing on startup with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ImportError: cannot import name &amp;#39;SlaCallback&amp;#39; from &amp;#39;plugins.message_callback&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A colleague had recently added an &lt;code&gt;SlaCallback&lt;/code&gt; class to &lt;code&gt;message_callback.py&lt;/code&gt; in our plugins directory. The scheduler could see it fine because the plugins directory is volume-mounted. But the worker pods don&amp;#39;t use the volume mount for Python imports. They use whatever is baked into the Docker image.&lt;/p&gt;
&lt;p&gt;The image hadn&amp;#39;t been rebuilt since the new class was added. The file existed on disk (via the mount), but Python&amp;#39;s import resolution was picking up the old version from the image&amp;#39;s site-packages.&lt;/p&gt;
&lt;p&gt;Fix: rebuild the Docker image with the latest code and push it to our registry.&lt;/p&gt;
&lt;p&gt;Done. Easy. Except not really.&lt;/p&gt;
&lt;h2&gt;Root cause 2: Kubernetes image caching&lt;/h2&gt;
&lt;p&gt;After pushing the new image, I restarted the Airflow deployment. Same error. Same &lt;code&gt;ImportError&lt;/code&gt;. I double-checked the registry. The new image was there, with the correct code. So why were the pods still running the old one?&lt;/p&gt;
&lt;p&gt;I ran &lt;code&gt;kubectl describe pod&lt;/code&gt; on one of the worker pods and looked at the image digest. It was the old digest. The tag was the same (&lt;code&gt;latest&lt;/code&gt; equivalent for our setup), but the node had already pulled that tag before. Since &lt;code&gt;imagePullPolicy&lt;/code&gt; was set to &lt;code&gt;IfNotPresent&lt;/code&gt;, Kubernetes looked at the tag, saw it already had an image with that tag cached locally, and used the cached version.&lt;/p&gt;
&lt;p&gt;This is one of those things you know intellectually but forget in practice. If you push a new image with the same tag, Kubernetes nodes that already pulled the old image will keep using it.&lt;/p&gt;
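&lt;p&gt;Asking for the digest directly beats squinting at &lt;code&gt;describe&lt;/code&gt; output (pod name is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get pod airflow-worker-abc123 \
  -o jsonpath=&amp;#39;{.status.containerStatuses[0].imageID}&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If that digest isn&amp;#39;t the one you just pushed, the node served you its cache.&lt;/p&gt;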
&lt;p&gt;Fix: deploy with a new unique tag. We switched to using the git commit SHA as the image tag, which is what we should have been doing all along. Pods pulled the new image, and the &lt;code&gt;ImportError&lt;/code&gt; went away.&lt;/p&gt;
&lt;p&gt;Progress. But now a different error.&lt;/p&gt;
&lt;h2&gt;Root cause 3: environment variables in the wrong namespace&lt;/h2&gt;
&lt;p&gt;The import was fixed. &lt;code&gt;SlaCallback&lt;/code&gt; loaded correctly. But when an SLA miss actually fired, the callback crashed with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ValueError: MESSAGE_CALLBACK_ENDPOINT is not set
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This env var was defined in a ConfigMap in the &lt;code&gt;pipelines&lt;/code&gt; namespace, where our DAG worker pods run. That makes sense for tasks. But SLA callbacks don&amp;#39;t run in worker pods. They run inside the scheduler process, which lives in the &lt;code&gt;airflow&lt;/code&gt; namespace.&lt;/p&gt;
&lt;p&gt;The scheduler had no idea &lt;code&gt;MESSAGE_CALLBACK_ENDPOINT&lt;/code&gt; existed because it was never injected into the scheduler&amp;#39;s environment. The ConfigMap was scoped to the wrong namespace, and the Helm chart for the scheduler didn&amp;#39;t reference it.&lt;/p&gt;
&lt;p&gt;Fix: add the env var to the scheduler deployment through our Terraform config. We added it to the &lt;code&gt;extraEnv&lt;/code&gt; section in the Helm values for the Airflow scheduler. Applied the change, scheduler restarted, env var present.&lt;/p&gt;
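&lt;p&gt;The shape of that change, assuming the official Apache Airflow Helm chart (the endpoint value is a placeholder, and the exact key varies between chart versions):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# values.yaml: entries in the top-level env list are injected into all
# Airflow containers, including the scheduler where SLA callbacks run
env:
  - name: MESSAGE_CALLBACK_ENDPOINT
    value: &amp;quot;https://hooks.example.com/notify&amp;quot;
&lt;/code&gt;&lt;/pre&gt;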
&lt;p&gt;SLA callback ran without crashing. Notification sent. I checked Slack expecting a clean message.&lt;/p&gt;
&lt;p&gt;What I got was gibberish.&lt;/p&gt;
&lt;h2&gt;Root cause 4: iterating over a string instead of objects&lt;/h2&gt;
&lt;p&gt;The notification message looked something like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;SLA Miss for tasks: m, y, _, t, a, s, k, _, i, d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Individual characters. The task ID had been split into single characters. That&amp;#39;s what happens when you iterate over a string in Python thinking it&amp;#39;s a list.&lt;/p&gt;
&lt;p&gt;I opened the callback code. The relevant bit looked roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def sla_callback(dag, task_list, **kwargs):
    for task in task_list:
        send_notification(task_id=task.task_id)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Looks fine, right? The problem was that &lt;code&gt;task_list&lt;/code&gt; wasn&amp;#39;t always a list of &lt;code&gt;SlaMiss&lt;/code&gt; objects. In some code paths, it was a string representation that got passed through. The &lt;code&gt;for task in task_list&lt;/code&gt; line was iterating over characters of the string &lt;code&gt;&amp;quot;my_task_id&amp;quot;&lt;/code&gt; instead of a list containing a &lt;code&gt;SlaMiss&lt;/code&gt; object.&lt;/p&gt;
&lt;p&gt;The fix was to properly handle the input:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def sla_callback(dag, task_list, **kwargs):
    if isinstance(task_list, str):
        task_ids = [task_list]
    else:
        task_ids = [t.task_id for t in task_list]

    for task_id in task_ids:
        send_notification(task_id=task_id)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After this fix, the notification came through correctly. Clean task IDs, proper formatting.&lt;/p&gt;
&lt;p&gt;Four bugs. One error message.&lt;/p&gt;
&lt;h2&gt;What I took away from this&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;One error can hide three more.&lt;/strong&gt; The original &lt;code&gt;Executor reports task instance finished (failed)&lt;/code&gt; message pointed at the first problem, but fixing it just uncovered the next one. I&amp;#39;ve started keeping a scratch note during debugging sessions so I can track whether I&amp;#39;m actually making progress or just peeling layers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Always check the image digest, not the tag.&lt;/strong&gt; &lt;code&gt;kubectl describe pod&lt;/code&gt; shows you exactly which image digest is running. If you&amp;#39;re reusing tags (please don&amp;#39;t), the digest is the only truth. We now use commit SHAs as tags and set &lt;code&gt;imagePullPolicy: Always&lt;/code&gt; for our staging environment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Namespace boundaries are real walls.&lt;/strong&gt; It&amp;#39;s easy to forget that the scheduler and worker pods can live in different namespaces with different ConfigMaps and Secrets. If your callback code needs an env var, make sure it&amp;#39;s available where the callback actually executes, not just where the tasks run.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Test with real scenarios.&lt;/strong&gt; We had unit tests for the SLA callback, but they passed in a proper list of mock &lt;code&gt;SlaMiss&lt;/code&gt; objects. The bug only showed up with real SLA miss data flowing through Airflow&amp;#39;s actual callback mechanism. Sometimes you need to trigger the real thing in a staging environment to catch these issues.&lt;/p&gt;
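&lt;p&gt;The string case is cheap to pin down in a plain unit test once you know to write it. A minimal sketch, where the helper mirrors the input handling from the fix above and &lt;code&gt;FakeSlaMiss&lt;/code&gt; stands in for Airflow&amp;#39;s real object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def resolve_task_ids(task_list):
    # A bare string is one task ID; anything else is assumed to be
    # a list of SlaMiss-like objects carrying a task_id attribute.
    if isinstance(task_list, str):
        return [task_list]
    return [t.task_id for t in task_list]


class FakeSlaMiss:
    def __init__(self, task_id):
        self.task_id = task_id


# The case that produced the one-character-per-task notification:
assert resolve_task_ids(&amp;quot;my_task_id&amp;quot;) == [&amp;quot;my_task_id&amp;quot;]
# The normal case:
assert resolve_task_ids([FakeSlaMiss(&amp;quot;a&amp;quot;), FakeSlaMiss(&amp;quot;b&amp;quot;)]) == [&amp;quot;a&amp;quot;, &amp;quot;b&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;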
&lt;p&gt;The whole thing took about four hours from first error to clean notification. Not my worst debugging session, but definitely the most layered one. Each fix felt like a victory until the next error showed up. At least now the SLA notifications work, and we have commit-SHA image tags as a bonus.&lt;/p&gt;
</content:encoded></item><item><title>Chasing a Transitive Dependency Vulnerability</title><link>https://rezhajul.io/posts/chasing-a-transitive-dependency-vulnerability/</link><guid isPermaLink="true">https://rezhajul.io/posts/chasing-a-transitive-dependency-vulnerability/</guid><description>How a vulnerability in fast-xml-parser affected my blog through @astrojs/rss, and why transitive dependencies are quietly terrifying.</description><pubDate>Thu, 19 Feb 2026 05:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Last week I found out my blog was shipping a vulnerable dependency. Not one I installed directly, but one hiding two levels deep in the dependency tree: &lt;code&gt;fast-xml-parser&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;The Vulnerability&lt;/h2&gt;
&lt;p&gt;Versions 4.3.6 through 5.3.3 of &lt;code&gt;fast-xml-parser&lt;/code&gt; throw a &lt;code&gt;RangeError&lt;/code&gt; when they encounter out-of-range numeric character entities like &lt;code&gt;&amp;amp;#9999999;&lt;/code&gt; or &lt;code&gt;&amp;amp;#xFFFFFF;&lt;/code&gt;. That might sound obscure, but it means any application that processes untrusted XML input can be crashed with a carefully crafted payload.&lt;/p&gt;
&lt;p&gt;For a blog, the attack surface is the RSS feed. This blog uses &lt;code&gt;@astrojs/rss&lt;/code&gt; to generate feeds, and &lt;code&gt;@astrojs/rss&lt;/code&gt; depends on &lt;code&gt;fast-xml-parser@^5.3.3&lt;/code&gt;. If someone sent malformed XML to any endpoint that parses it, the whole thing could go down.&lt;/p&gt;
&lt;h2&gt;How I Found It&lt;/h2&gt;
&lt;p&gt;I was doing a routine &lt;code&gt;bun audit&lt;/code&gt; (okay, I was procrastinating on actually writing) and the report flagged &lt;code&gt;fast-xml-parser&lt;/code&gt;. I checked my &lt;code&gt;package.json&lt;/code&gt; and didn&amp;#39;t see it listed anywhere. Took me a minute to realize it was coming in through &lt;code&gt;@astrojs/rss&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;bun pm ls | grep fast-xml-parser
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sure enough, there it was. A transitive dependency I never explicitly chose to install.&lt;/p&gt;
&lt;h2&gt;The Annoying Part&lt;/h2&gt;
&lt;p&gt;The fix for &lt;code&gt;fast-xml-parser&lt;/code&gt; already existed in version 5.3.4. But &lt;code&gt;@astrojs/rss&lt;/code&gt; hadn&amp;#39;t released an update that bumped its dependency yet. So I couldn&amp;#39;t just run &lt;code&gt;bun update @astrojs/rss&lt;/code&gt; and move on.&lt;/p&gt;
&lt;p&gt;This is the part about transitive dependencies that bugs me. You&amp;#39;re trusting that your dependencies keep &lt;em&gt;their&lt;/em&gt; dependencies up to date. When they don&amp;#39;t, or when they&amp;#39;re slow about it, you&amp;#39;re left in an awkward spot. Wait and stay vulnerable, or take matters into your own hands.&lt;/p&gt;
&lt;p&gt;I took matters into my own hands.&lt;/p&gt;
&lt;h2&gt;The Fix&lt;/h2&gt;
&lt;p&gt;The solution was to use &lt;code&gt;overrides&lt;/code&gt; in &lt;code&gt;package.json&lt;/code&gt; to force the resolved version of &lt;code&gt;fast-xml-parser&lt;/code&gt; to the patched release:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;overrides&amp;quot;: {
    &amp;quot;fast-xml-parser&amp;quot;: &amp;quot;^5.3.4&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I also added it as a &lt;code&gt;devDependency&lt;/code&gt; so Dependabot would pick it up and notify me about future updates:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;devDependencies&amp;quot;: {
    &amp;quot;fast-xml-parser&amp;quot;: &amp;quot;^5.3.4&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After running &lt;code&gt;bun install&lt;/code&gt;, I double-checked &lt;code&gt;bun.lock&lt;/code&gt; to make sure the resolved version was actually 5.3.4+. It was.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;grep -A 2 &amp;#39;fast-xml-parser&amp;#39; bun.lock
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Build passed, RSS feeds still generated correctly. Done.&lt;/p&gt;
&lt;h2&gt;The uncomfortable truth about transitive deps&lt;/h2&gt;
&lt;p&gt;You carefully vet the libraries you install. But each of those libraries pulls in its own tree of dependencies that you probably never look at. With npm or bun, your actual dependency tree is way larger than what&amp;#39;s in &lt;code&gt;package.json&lt;/code&gt;. A vulnerability anywhere in that tree is your problem, even if you didn&amp;#39;t put it there.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;overrides&lt;/code&gt; approach works, but it&amp;#39;s a hack. You&amp;#39;re pinning a version that your direct dependency didn&amp;#39;t ask for, and there&amp;#39;s always a small risk of breaking something. For a patch bump that fixes a security issue, it&amp;#39;s almost always fine. But it&amp;#39;s one more thing to remember to clean up once upstream catches up.&lt;/p&gt;
&lt;p&gt;Run &lt;code&gt;bun audit&lt;/code&gt; regularly. Learn how &lt;code&gt;overrides&lt;/code&gt; work. And if you override something, add it as a &lt;code&gt;devDependency&lt;/code&gt; too so Dependabot can keep an eye on it for you. Check back later and remove the override once the upstream package updates.&lt;/p&gt;
&lt;p&gt;Maintaining a website means maintaining everything under it, including the stuff you didn&amp;#39;t choose to install.&lt;/p&gt;
</content:encoded></item><item><title>Managing DNS as Code with DNSControl</title><link>https://rezhajul.io/posts/managing-dns-as-code-with-dnscontrol/</link><guid isPermaLink="true">https://rezhajul.io/posts/managing-dns-as-code-with-dnscontrol/</guid><description>How I stopped editing DNS records in web UIs and started versioning them in Git. Includes fixing Porkbun provider quirks and handling ACME challenges.</description><pubDate>Thu, 19 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I manage a few domains. Nothing crazy, but enough that logging into Porkbun&amp;#39;s web UI every time I need to change a record got old fast. Add a TXT record, typo the value, wait for propagation, realize the typo, fix it, wait again. Fifteen minutes gone for a one-line change.&lt;/p&gt;
&lt;p&gt;The real problem is that web UIs don&amp;#39;t have undo. They don&amp;#39;t have diffs. They don&amp;#39;t have commit messages. I changed a DNS record last month and I genuinely could not remember what it was before. Was the TTL 300 or 3600? Did I have a CNAME there or an A record? No idea.&lt;/p&gt;
&lt;p&gt;So I moved everything to &lt;a href=&quot;https://docs.dnscontrol.org/&quot;&gt;DNSControl&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;What DNSControl actually does&lt;/h2&gt;
&lt;p&gt;DNSControl is an open-source tool from Stack Exchange (the folks behind Stack Overflow). You define your DNS records in a &lt;code&gt;dnsconfig.js&lt;/code&gt; file, and DNSControl talks to your DNS provider&amp;#39;s API to make reality match your config.&lt;/p&gt;
&lt;p&gt;The workflow is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Edit &lt;code&gt;dnsconfig.js&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;dnscontrol preview&lt;/code&gt; to see what would change&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;dnscontrol push&lt;/code&gt; to apply it&lt;/li&gt;
&lt;li&gt;Commit to Git&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That&amp;#39;s it. Your DNS records are now version-controlled. If something breaks, &lt;code&gt;git log&lt;/code&gt; tells you exactly what changed and when. &lt;code&gt;git revert&lt;/code&gt; brings it back.&lt;/p&gt;
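&lt;p&gt;The whole loop, as commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;vim dnsconfig.js
dnscontrol preview   # dry run: diff against live DNS, changes nothing
dnscontrol push      # apply the corrections
git commit -am &amp;quot;add homelab A record&amp;quot;
&lt;/code&gt;&lt;/pre&gt;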
&lt;h2&gt;The config file&lt;/h2&gt;
&lt;p&gt;DNSControl uses JavaScript for its config. Not full Node.js, just a simple DSL that happens to use JS syntax. Here&amp;#39;s a stripped-down version of what mine looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;var REG_CHANGEME = NewRegistrar(&amp;quot;none&amp;quot;);
var DSP_PORKBUN = NewDnsProvider(&amp;quot;porkbun&amp;quot;);

D(&amp;quot;example.com&amp;quot;, REG_CHANGEME,
    DnsProvider(DSP_PORKBUN),

    A(&amp;quot;@&amp;quot;, &amp;quot;1.2.3.4&amp;quot;),
    A(&amp;quot;homelab&amp;quot;, &amp;quot;1.2.3.4&amp;quot;),

    CNAME(&amp;quot;www&amp;quot;, &amp;quot;example.com.&amp;quot;),

    MX(&amp;quot;@&amp;quot;, 10, &amp;quot;mail.example.com.&amp;quot;),

    TXT(&amp;quot;@&amp;quot;, &amp;quot;v=spf1 mx ~all&amp;quot;),
    TXT(&amp;quot;_dmarc&amp;quot;, &amp;quot;v=DMARC1; p=quarantine&amp;quot;),
);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each record type has its own function. &lt;code&gt;A()&lt;/code&gt;, &lt;code&gt;CNAME()&lt;/code&gt;, &lt;code&gt;MX()&lt;/code&gt;, &lt;code&gt;TXT()&lt;/code&gt;. If you&amp;#39;ve ever written a DNS zone file, this reads the same way but without the confusing syntax.&lt;/p&gt;
&lt;p&gt;You also need a &lt;code&gt;creds.json&lt;/code&gt; file with your provider&amp;#39;s API keys:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &amp;quot;porkbun&amp;quot;: {
        &amp;quot;TYPE&amp;quot;: &amp;quot;PORKBUN&amp;quot;,
        &amp;quot;api_key&amp;quot;: &amp;quot;pk1_...&amp;quot;,
        &amp;quot;secret_api_key&amp;quot;: &amp;quot;sk1_...&amp;quot;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This file never gets committed. It goes in &lt;code&gt;.gitignore&lt;/code&gt; immediately.&lt;/p&gt;
&lt;h2&gt;Preview before you push&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;preview&lt;/code&gt; command is where the safety net lives. It shows you a diff of what DNSControl would do, without actually doing it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ dnscontrol preview
******************** Domain: example.com
1 correction
#1: CREATE A homelab.example.com 1.2.3.4 ttl=300
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I run this obsessively. DNS mistakes propagate to caches everywhere, and depending on your TTL, you might be stuck with a bad record for hours. Seeing the diff first saves you from that.&lt;/p&gt;
&lt;h2&gt;Moving to GitHub Actions&lt;/h2&gt;
&lt;p&gt;Running &lt;code&gt;dnscontrol push&lt;/code&gt; locally works, but I wanted it automated. Push to &lt;code&gt;main&lt;/code&gt;, DNS updates. No manual steps.&lt;/p&gt;
&lt;h3&gt;The third-party action trap&lt;/h3&gt;
&lt;p&gt;My first attempt used &lt;code&gt;wblondel/dnscontrol-action@v4.27.1&lt;/code&gt;. It worked fine for basic records. Then I tried to add a &lt;code&gt;URL301&lt;/code&gt; redirect. Porkbun supports URL forwarding natively, and I wanted to set up a redirect from an old subdomain.&lt;/p&gt;
&lt;p&gt;The push failed with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;INFO#1: Domain &amp;quot;example.com&amp;quot; provider porkbun Error: porkbun.toReq rtype &amp;quot;URL301&amp;quot; unimplemented
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I spent a while thinking this was a Porkbun API issue. It wasn&amp;#39;t. DNSControl added &lt;code&gt;URL301&lt;/code&gt; support for Porkbun in v4.30.0, but the GitHub Action I was using had v4.27.1 baked into its Dockerfile. The action&amp;#39;s maintainer hadn&amp;#39;t updated it.&lt;/p&gt;
&lt;p&gt;This is the problem with wrapper actions. You&amp;#39;re at the mercy of someone else&amp;#39;s release schedule. The action pins a specific DNSControl version, and if that version is missing the feature you need, you&amp;#39;re stuck until the maintainer gets around to updating.&lt;/p&gt;
&lt;h3&gt;Using the official Docker image&lt;/h3&gt;
&lt;p&gt;The fix is to skip the wrapper and use the official DNSControl Docker image from GitHub Container Registry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: DNS Push

on:
  push:
    branches:
      - main

jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Decode credentials
        run: |
          echo &amp;quot;${{ secrets.CREDS }}&amp;quot; | base64 -d &amp;gt; creds.json

      - name: DNSControl preview
        run: &amp;gt;
          docker run --rm -v &amp;quot;$PWD:/dns&amp;quot; -w /dns
          ghcr.io/stackexchange/dnscontrol:4.34.0 preview

      - name: DNSControl push
        run: &amp;gt;
          docker run --rm -v &amp;quot;$PWD:/dns&amp;quot; -w /dns
          ghcr.io/stackexchange/dnscontrol:4.34.0 push
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A couple of things I tripped over while setting this up:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Image tag format.&lt;/strong&gt; The official image tags don&amp;#39;t have a &lt;code&gt;v&lt;/code&gt; prefix. Use &lt;code&gt;4.34.0&lt;/code&gt;, not &lt;code&gt;v4.34.0&lt;/code&gt;. I pulled the wrong tag twice before noticing.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Credentials as a secret.&lt;/strong&gt; I Base64-encode the entire &lt;code&gt;creds.json&lt;/code&gt; and store it as a GitHub secret called &lt;code&gt;CREDS&lt;/code&gt;. The workflow decodes it at runtime. This way the API keys never appear in the repo, not even in environment variables that might leak in logs.&lt;/p&gt;
&lt;p&gt;To encode it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;base64 -i creds.json | pbcopy                        # macOS (BSD base64 doesn&amp;#39;t wrap)
base64 -w 0 creds.json | xclip -selection clipboard  # Linux
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Paste the result into your repo&amp;#39;s Settings &amp;gt; Secrets &amp;gt; Actions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Running preview before push.&lt;/strong&gt; I added a &lt;code&gt;preview&lt;/code&gt; step before &lt;code&gt;push&lt;/code&gt; so the action logs show exactly what changed. If the preview step fails (say, a syntax error in &lt;code&gt;dnsconfig.js&lt;/code&gt;), the push never runs. Cheap safety net.&lt;/p&gt;
&lt;h2&gt;The ACME challenge problem&lt;/h2&gt;
&lt;p&gt;This one caught me off guard.&lt;/p&gt;
&lt;p&gt;I use Caddy as my reverse proxy, and Caddy handles SSL certificates automatically through Let&amp;#39;s Encrypt. Part of that process involves creating &lt;code&gt;_acme-challenge&lt;/code&gt; TXT records for domain validation. These records are short-lived. Caddy creates them, Let&amp;#39;s Encrypt reads them, and Caddy deletes them.&lt;/p&gt;
&lt;p&gt;The conflict: DNSControl doesn&amp;#39;t know about these records because they&amp;#39;re not in &lt;code&gt;dnsconfig.js&lt;/code&gt;. So every time I push, DNSControl sees them as unauthorized drift and tries to delete them. If Caddy happens to be in the middle of a certificate renewal, DNSControl just nuked its validation record.&lt;/p&gt;
&lt;p&gt;Even when the timing didn&amp;#39;t overlap, my CI logs were full of &amp;quot;corrections&amp;quot; for records I never created:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#3: DELETE TXT _acme-challenge.example.com &amp;quot;some-long-token-here&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Annoying at best. Breaks certificate renewals at worst.&lt;/p&gt;
&lt;h3&gt;The fix: IGNORE()&lt;/h3&gt;
&lt;p&gt;DNSControl has an &lt;code&gt;IGNORE()&lt;/code&gt; function that tells it to leave certain records alone:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;D(&amp;quot;example.com&amp;quot;, REG_CHANGEME,
    DnsProvider(DSP_PORKBUN),

    A(&amp;quot;@&amp;quot;, &amp;quot;1.2.3.4&amp;quot;),
    CNAME(&amp;quot;www&amp;quot;, &amp;quot;example.com.&amp;quot;),
    MX(&amp;quot;@&amp;quot;, 10, &amp;quot;mail.example.com.&amp;quot;),
    TXT(&amp;quot;@&amp;quot;, &amp;quot;v=spf1 mx ~all&amp;quot;),

    // Let Caddy manage ACME challenges
    IGNORE(&amp;quot;_acme-challenge&amp;quot;, &amp;quot;TXT&amp;quot;),
);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this in place, DNSControl pretends &lt;code&gt;_acme-challenge&lt;/code&gt; TXT records don&amp;#39;t exist. It won&amp;#39;t create them, delete them, or complain about them. Caddy does its thing, DNSControl does its thing, and they don&amp;#39;t step on each other.&lt;/p&gt;
&lt;p&gt;This pattern works for any external system that creates DNS records. If you use Kubernetes ExternalDNS, you&amp;#39;d ignore whatever prefix it uses. If you have a third-party email provider that manages its own DKIM records, same idea.&lt;/p&gt;
&lt;p&gt;You can also use wildcards:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;IGNORE(&amp;quot;_acme-challenge.**&amp;quot;, &amp;quot;TXT&amp;quot;),  // All subdomains too
IGNORE(&amp;quot;**&amp;quot;, &amp;quot;TXT&amp;quot;, &amp;quot;some-specific-value&amp;quot;),  // Match by value
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Splitting zones into separate files&lt;/h2&gt;
&lt;p&gt;Once I had more than two domains in &lt;code&gt;dnsconfig.js&lt;/code&gt;, the file got long. Scrolling past 80 lines of records for one domain to find the one I actually want to edit is annoying. DNSControl supports &lt;code&gt;require()&lt;/code&gt;, so I split each domain into its own file.&lt;/p&gt;
&lt;p&gt;The directory structure:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;dns/
├── dnsconfig.js
├── creds.json
└── zones/
    ├── constants.js
    ├── example.com.js
    ├── example.org.js
    └── another-domain.id.js
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The main &lt;code&gt;dnsconfig.js&lt;/code&gt; becomes a table of contents:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;require(&amp;quot;zones/constants.js&amp;quot;)

// Porkbun domains
require(&amp;quot;zones/example.com.js&amp;quot;)
require(&amp;quot;zones/example.org.js&amp;quot;)

// Cloudflare domains
require(&amp;quot;zones/another-domain.id.js&amp;quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I keep shared values in &lt;code&gt;constants.js&lt;/code&gt;. IP addresses mostly. When I migrate a service to a new server, I change the IP in one place instead of hunting through every zone file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;HOMELAB_IPV4 = &amp;quot;1.2.3.4&amp;quot;;
VPS_IPV4 = &amp;quot;5.6.7.8&amp;quot;;
BACKUP_IPV4 = &amp;quot;9.10.11.12&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then each zone file uses those variables:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;var DSP_PORKBUN = NewDnsProvider(&amp;quot;porkbun&amp;quot;);
var REG_CHANGEME = NewRegistrar(&amp;quot;none&amp;quot;);

D(&amp;quot;example.com&amp;quot;, REG_CHANGEME,
    DnsProvider(DSP_PORKBUN),
    DefaultTTL(600),

    A(&amp;quot;@&amp;quot;, HOMELAB_IPV4),
    A(&amp;quot;homelab&amp;quot;, HOMELAB_IPV4),
    A(&amp;quot;vpn&amp;quot;, VPS_IPV4),

    CNAME(&amp;quot;www&amp;quot;, &amp;quot;example.com.&amp;quot;),
    MX(&amp;quot;@&amp;quot;, 10, &amp;quot;mail.example.com.&amp;quot;),
    TXT(&amp;quot;@&amp;quot;, &amp;quot;v=spf1 mx ~all&amp;quot;),

    IGNORE(&amp;quot;_acme-challenge&amp;quot;, &amp;quot;TXT&amp;quot;),
);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The nice thing about this setup is that each file is self-contained. When I need to update DNS for a specific domain, I open that one file. The diff in Git is clean too. A commit that says &amp;quot;add VPN subdomain to example.com&amp;quot; only touches &lt;code&gt;zones/example.com.js&lt;/code&gt;. No noise from other domains.&lt;/p&gt;
&lt;p&gt;I also group the &lt;code&gt;require()&lt;/code&gt; calls by provider in the main config. Makes it easy to see which domains live where. When I eventually move a domain from Porkbun to Cloudflare, I just move the require line and update the provider in the zone file.&lt;/p&gt;
&lt;h2&gt;Things I wish I knew earlier&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Test with a throwaway domain first.&lt;/strong&gt; I tested DNSControl against my main domain on the first try. Nothing went wrong, but in hindsight that was reckless. Buy a cheap &lt;code&gt;.xyz&lt;/code&gt; domain and experiment there.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TTL matters more than you think.&lt;/strong&gt; When I was iterating on the config, I set my TTL to 300 seconds (5 minutes). Once everything was stable, I bumped it to 3600. Lower TTL means faster propagation of changes, but also more DNS queries hitting your provider.&lt;/p&gt;
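&lt;p&gt;DNSControl lets you mix both: a domain-wide default plus per-record overrides for anything still in flux (values illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;D(&amp;quot;example.com&amp;quot;, REG_CHANGEME,
    DnsProvider(DSP_PORKBUN),
    DefaultTTL(3600),                    // stable records
    A(&amp;quot;@&amp;quot;, &amp;quot;1.2.3.4&amp;quot;),
    A(&amp;quot;staging&amp;quot;, &amp;quot;1.2.3.4&amp;quot;, TTL(300)),  // still changing, keep it short
);
&lt;/code&gt;&lt;/pre&gt;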
&lt;p&gt;&lt;strong&gt;&lt;code&gt;dnscontrol get-zones&lt;/code&gt; is useful for initial setup.&lt;/strong&gt; If you already have a bunch of records in Porkbun, you don&amp;#39;t need to type them all out manually. Run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;dnscontrol get-zones --format=js porkbun PORKBUN example.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It dumps your existing records as a &lt;code&gt;dnsconfig.js&lt;/code&gt; snippet. Copy, paste, clean up. Saved me a lot of manual transcription.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Keep &lt;code&gt;creds.json&lt;/code&gt; out of your shell history too.&lt;/strong&gt; If you&amp;#39;re echo-ing API keys into a file, prefix the command with a space so it doesn&amp;#39;t get saved to &lt;code&gt;.bash_history&lt;/code&gt; (this relies on &lt;code&gt;HISTCONTROL&lt;/code&gt; including &lt;code&gt;ignorespace&lt;/code&gt;; Debian and Ubuntu set &lt;code&gt;ignoreboth&lt;/code&gt; in their default &lt;code&gt;.bashrc&lt;/code&gt;, which covers it).&lt;/p&gt;
&lt;h2&gt;Was it worth it?&lt;/h2&gt;
&lt;p&gt;For one domain with five records? Probably not. The web UI is faster.&lt;/p&gt;
&lt;p&gt;But I have multiple domains, and the record count keeps growing as I add more services to the homelab. Last week I added three records in one commit for a new service. Preview, push, done. No browser tabs, no copy-pasting IPs, no wondering if I remembered to save.&lt;/p&gt;
&lt;p&gt;The Git history is the real win. I can see exactly when I added that weird CNAME, who asked for it (past me, in the commit message), and why. When something breaks, I don&amp;#39;t guess. I look at the log.&lt;/p&gt;
&lt;p&gt;DNS is one of those things where mistakes are annoying and slow to fix. Having a &lt;code&gt;preview&lt;/code&gt; command and a &lt;code&gt;git revert&lt;/code&gt; escape hatch makes me a lot less nervous about touching it.&lt;/p&gt;
</content:encoded></item><item><title>Building Almanac Games: Scraping PSN with Duct Tape and Playwright</title><link>https://rezhajul.io/posts/building-almanac-games-psn-scraper/</link><guid isPermaLink="true">https://rezhajul.io/posts/building-almanac-games-psn-scraper/</guid><description>I wanted to show my PlayStation trophies on my blog. It took a Python scraper, Cloudflare cookie theft, and way too many regex fixes.</description><pubDate>Wed, 18 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I track movies and TV shows on my blog&amp;#39;s Almanac page. It&amp;#39;s a timeline of everything I&amp;#39;ve watched, sorted by date. I liked it. Then I thought: what about games?&lt;/p&gt;
&lt;p&gt;I have 367 games on my PSN account. 108 Platinums. Over 3,500 trophies. That data is sitting on PSNProfiles, publicly visible. How hard could it be to pull it into my blog?&lt;/p&gt;
&lt;p&gt;Pretty hard, actually.&lt;/p&gt;
&lt;h2&gt;The scraper&lt;/h2&gt;
&lt;p&gt;PSNProfiles doesn&amp;#39;t have an API. So I wrote a Python scraper using Playwright and BeautifulSoup. Playwright handles the browser automation, BeautifulSoup parses the HTML.&lt;/p&gt;
&lt;p&gt;The scraper navigates to my profile, waits for the page to load, then extracts each game row: title, platform, trophy breakdown, completion percentage, rank, and cover image URL.&lt;/p&gt;
&lt;p&gt;Simple enough in theory. Then Cloudflare showed up.&lt;/p&gt;
&lt;h2&gt;Cloudflare ruins everything&lt;/h2&gt;
&lt;p&gt;PSNProfiles sits behind Cloudflare. You can&amp;#39;t just &lt;code&gt;requests.get()&lt;/code&gt; the page. You get a challenge, and if you don&amp;#39;t solve it, you get nothing.&lt;/p&gt;
&lt;p&gt;My workaround: Playwright with a persistent browser context. First run uses &lt;code&gt;--headed&lt;/code&gt; mode so I can manually solve the Cloudflare challenge. The browser session is saved to a &lt;code&gt;browser_data/&lt;/code&gt; directory. Future runs reuse that session headlessly.&lt;/p&gt;
&lt;p&gt;But I also needed the Cloudflare cookie for downloading images later. So the scraper dumps the &lt;code&gt;cf_clearance&lt;/code&gt; cookie to a text file after each run. Feels hacky. Works fine.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def dump_cf_clearance(context):
    # Persist the clearance cookie so other scripts (like the image
    # downloader) can reuse the already-solved challenge.
    for cookie in context.cookies():
        if cookie[&amp;quot;name&amp;quot;] == &amp;quot;cf_clearance&amp;quot;:
            with open(&amp;quot;data/cf_clearance.txt&amp;quot;, &amp;quot;w&amp;quot;) as f:
                f.write(cookie[&amp;quot;value&amp;quot;])
            break
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The image problem&lt;/h2&gt;
&lt;p&gt;PSNProfiles hosts game covers on &lt;code&gt;img.psnprofiles.com&lt;/code&gt;. I tried hotlinking them. 403. Tried routing through &lt;code&gt;wsrv.nl&lt;/code&gt; as a proxy. Also 403. Cloudflare blocks everything that doesn&amp;#39;t come from a real browser session.&lt;/p&gt;
&lt;p&gt;So I gave up on hotlinking and mirrored every image locally. A TypeScript script reads &lt;code&gt;games.json&lt;/code&gt;, extracts each game&amp;#39;s ID from the cover URL, and downloads the large version using the stolen &lt;code&gt;cf_clearance&lt;/code&gt; cookie.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;function extractGameId(coverUrl: string): string | null {
  const match = coverUrl.match(/game\/[sl]\/(\d+)\//);
  return match ? match[1] : null;
}
&lt;/code&gt;&lt;/pre&gt;
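&lt;p&gt;The download half of that script is mostly about attaching the stolen cookie to each request. Here&amp;#39;s the same idea sketched in Python (the real downloader is TypeScript run with Bun, and the function shape here is my own stand-in). One gotcha: Cloudflare ties &lt;code&gt;cf_clearance&lt;/code&gt; to the User-Agent that solved the challenge, so the two have to match.&lt;/p&gt;

```python
from urllib.request import Request

def cover_request(cover_url: str, cf_clearance: str, user_agent: str) -> Request:
    # Reuse the clearance cookie dumped by the scraper. The User-Agent
    # must match the browser session that originally solved the
    # challenge, or Cloudflare rejects the cookie.
    return Request(cover_url, headers={
        "Cookie": f"cf_clearance={cf_clearance}",
        "User-Agent": user_agent,
    })
```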
&lt;p&gt;364 out of 367 games downloaded. The missing three had broken URLs on PSNProfiles itself. I can live with that.&lt;/p&gt;
&lt;h2&gt;Trophy parsing is weird&lt;/h2&gt;
&lt;p&gt;PSNProfiles formats trophy counts differently depending on completion. If you&amp;#39;ve earned 12 out of 91 trophies, it says &amp;quot;12 of 91 Trophies&amp;quot;. Normal.&lt;/p&gt;
&lt;p&gt;But if you&amp;#39;ve earned all of them, it says &amp;quot;All 91 Trophies&amp;quot;. My regex only handled the first format. Every Platinum game showed NaN trophies. Took me longer than I&amp;#39;d like to admit to notice.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;trophy_match = re.search(r&amp;quot;(\d+)\s*of\s*(\d+)\s*Trophies?&amp;quot;, trophy_text)
all_match = re.search(r&amp;quot;All\s*(\d+)\s*Trophies?&amp;quot;, trophy_text)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There was also a whitespace issue. BeautifulSoup&amp;#39;s &lt;code&gt;get_text(strip=True)&lt;/code&gt; sometimes removes spaces between words, turning &amp;quot;1 of 91 Trophies&amp;quot; into &amp;quot;1of91Trophies&amp;quot;. The regex needed &lt;code&gt;\s*&lt;/code&gt; instead of literal spaces. Classic scraping pain.&lt;/p&gt;
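&lt;p&gt;Both formats plus the whitespace quirk fit in one small helper. A sketch of how I&amp;#39;d structure it (the function name and return shape are illustrative, not lifted from the scraper):&lt;/p&gt;

```python
import re

def parse_trophies(trophy_text):
    # "12 of 91 Trophies" -- partial completion. \s* instead of literal
    # spaces survives get_text(strip=True) collapsing the string into
    # "1of91Trophies".
    partial = re.search(r"(\d+)\s*of\s*(\d+)\s*Troph", trophy_text)
    if partial:
        return int(partial.group(1)), int(partial.group(2))
    # "All 91 Trophies" -- the 100% format that caused the NaN bug.
    complete = re.search(r"All\s*(\d+)\s*Troph", trophy_text)
    if complete:
        total = int(complete.group(1))
        return total, total
    return None  # unrecognized row; caller decides what to do
```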
&lt;h2&gt;The sync pipeline&lt;/h2&gt;
&lt;p&gt;Updating the blog with fresh game data involves three steps, glued together with a bash script:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Copy &lt;code&gt;games.json&lt;/code&gt; from the scraper directory into the Astro project&lt;/li&gt;
&lt;li&gt;Read the &lt;code&gt;cf_clearance&lt;/code&gt; cookie&lt;/li&gt;
&lt;li&gt;Run the Bun image downloader to grab any missing covers&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
set -euo pipefail  # stop early if the cookie file or games.json is missing
cp ../psngames/data/games.json src/data/psn/games.json
CF_COOKIE=$(cat ../psngames/data/cf_clearance.txt)
bun run scripts/download-psn-covers.ts &amp;quot;$CF_COOKIE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Not elegant. But it runs in under a minute and I don&amp;#39;t have to think about it.&lt;/p&gt;
&lt;h2&gt;Putting it on the page&lt;/h2&gt;
&lt;p&gt;The Almanac already had utilities for movies and TV shows. I added a &lt;code&gt;psn.ts&lt;/code&gt; utility that normalizes the scraped data and a &lt;code&gt;GameCard.astro&lt;/code&gt; component for rendering.&lt;/p&gt;
&lt;p&gt;One design issue: PSNProfiles serves covers in two sizes. Some are 320x176 landscape banners, others are 320x320 squares. My initial card used a portrait aspect ratio, which cropped everything badly. Switching to &lt;code&gt;aspect-square&lt;/code&gt; with &lt;code&gt;object-contain&lt;/code&gt; fixed it. The landscape banners sit inside the square with some padding. Not perfect, but nothing gets cut off.&lt;/p&gt;
&lt;p&gt;The unified almanac view merges movies, shows, and games into one chronological list. A function in &lt;code&gt;almanac.ts&lt;/code&gt; sorts everything by date. Now my Almanac page shows what I watched, what I played, and when.&lt;/p&gt;
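&lt;p&gt;That merge is the least interesting code in the project, which is the point: concatenate, sort by date, done. The real version lives in &lt;code&gt;almanac.ts&lt;/code&gt;; here&amp;#39;s the shape of it in Python, with the entry structure and field name being my assumptions:&lt;/p&gt;

```python
def merge_almanac(*sources):
    # Each source is a list of entries; every entry is assumed to
    # carry an ISO-8601 "date" string, which sorts correctly as text.
    merged = [entry for source in sources for entry in source]
    return sorted(merged, key=lambda entry: entry["date"], reverse=True)
```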
&lt;h2&gt;What I&amp;#39;d do differently&lt;/h2&gt;
&lt;p&gt;The Cloudflare dance is fragile. If my browser session expires, I have to manually re-solve the challenge. There&amp;#39;s no way around this without paying for a proxy service, and I&amp;#39;m not doing that for a hobby project.&lt;/p&gt;
&lt;p&gt;I also should have started with local image mirroring instead of trying three different proxy services first. Would have saved me a couple hours of yak-shaving.&lt;/p&gt;
&lt;h2&gt;Anyway&lt;/h2&gt;
&lt;p&gt;The whole thing is held together with a Python scraper, a bash script, a TypeScript image downloader, and a Cloudflare cookie I&amp;#39;m smuggling between processes. It&amp;#39;s not pretty. But my blog now shows 367 games with trophy progress, and the Almanac page has a unified timeline of everything I consume.&lt;/p&gt;
&lt;p&gt;Sometimes the best architecture is the one you can explain in one sentence: scrape it, download the images, render it.&lt;/p&gt;
</content:encoded></item><item><title>Redesigning my blog, part 5: what I learned</title><link>https://rezhajul.io/posts/redesigning-my-blog-what-i-learned/</link><guid isPermaLink="true">https://rezhajul.io/posts/redesigning-my-blog-what-i-learned/</guid><description>The last post in this series. What worked, what got immediately deleted, and what I&apos;d do differently.</description><pubDate>Tue, 17 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The redesign is done. For now. It&amp;#39;s never really done, because personal sites are the software equivalent of a car that&amp;#39;s always in the garage with the hood up. But the major work is finished, the series is wrapping up, and I want to look back at what actually worked and what didn&amp;#39;t.&lt;/p&gt;
&lt;p&gt;This isn&amp;#39;t a postmortem. It&amp;#39;s a blog post. I&amp;#39;ll keep it honest.&lt;/p&gt;
&lt;h2&gt;Phased implementation beats big-bang rewrites&lt;/h2&gt;
&lt;p&gt;The single best decision I made was splitting the redesign into phases. Audit first, then new features (links collection, blogroll), then the visual overhaul, then polish. Each phase was independently deployable. When the typography changes broke the note layout, I could revert that one commit without losing the trailing slash fixes from two weeks earlier.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve tried the big-bang approach before. Change everything at once, in a branch that lives for three months, and merge it all in one terrifying afternoon. The result was always the same: nothing shipped. The branch would drift so far from main that merging felt like defusing a bomb. Eventually I&amp;#39;d just abandon it and start over.&lt;/p&gt;
&lt;p&gt;Phases meant I could ship something every week. Momentum matters more than perfection.&lt;/p&gt;
&lt;h2&gt;Things that got immediately deleted&lt;/h2&gt;
&lt;p&gt;Some ideas didn&amp;#39;t survive first contact with the actual site.&lt;/p&gt;
&lt;p&gt;I added a &amp;quot;shipping data pipes&amp;quot; status line to the header. Thought it was funny and on-brand. Looked at it for about five minutes, felt embarrassed, and deleted it. It was trying too hard to be quirky.&lt;/p&gt;
&lt;p&gt;I also renamed the navigation labels. &amp;quot;Posts&amp;quot; became &amp;quot;Logs&amp;quot;, &amp;quot;Links&amp;quot; became &amp;quot;Bookmarks&amp;quot;, and &amp;quot;About&amp;quot; became &amp;quot;./about&amp;quot; in a terminal-cosplay phase. My GF asked me where the blog posts were. I almost changed them back, but the dropdown items inside still say &amp;quot;Posts&amp;quot;, &amp;quot;Notes&amp;quot;, &amp;quot;About&amp;quot;, so once you click, everything makes sense. The top-level labels are just flavoring. They&amp;#39;ve grown on me. Keeping them.&lt;/p&gt;
&lt;p&gt;Then there were the hover animations. I had this elaborate scale-and-glow effect on the post cards. Looked great in isolation, on a page with one card. On a page with eight cards, it felt like the site was having a seizure. Stripped it down to a simple color shift.&lt;/p&gt;
&lt;h2&gt;Audit before you design&lt;/h2&gt;
&lt;p&gt;This is the thing I&amp;#39;d tell past-me if I could. The &lt;a href=&quot;/posts/redesigning-my-blog-the-audit/&quot;&gt;audit&lt;/a&gt; caught problems that would have been painful to fix later.&lt;/p&gt;
&lt;p&gt;The trailing slash inconsistency was the big one. Some internal links had trailing slashes, some didn&amp;#39;t. Astro treats these as different routes. If I&amp;#39;d done the visual redesign first and then tried to fix URLs, I&amp;#39;d have broken every internal link I&amp;#39;d just carefully styled.&lt;/p&gt;
&lt;p&gt;Self-hosting fonts early also paid off. By the time I swapped in JetBrains Mono during the visual phase, the infrastructure was already there. No FOUT debugging at the last minute.&lt;/p&gt;
&lt;p&gt;Same with the render-blocking theme script. Fixing that during the audit meant I had a faster baseline before adding more CSS complexity. Performance problems are easier to prevent than to diagnose after the fact.&lt;/p&gt;
&lt;h2&gt;Accessibility is easier to build in than bolt on&lt;/h2&gt;
&lt;p&gt;I used &lt;code&gt;&amp;lt;details&amp;gt;&lt;/code&gt; for the dropdown navigation, which gave me keyboard support for free. No custom JavaScript state management, no aria attribute juggling. The browser just handles it.&lt;/p&gt;
&lt;p&gt;The view transition work had some nasty memory leak potential. &lt;code&gt;setInterval&lt;/code&gt; timers that weren&amp;#39;t being cleared on page transitions, duplicate event listeners stacking up. I caught these during implementation because I was thinking about cleanup from the start. Using &lt;code&gt;AbortController&lt;/code&gt; for event listener cleanup became a pattern I used everywhere. It&amp;#39;s one of those things where the upfront cost is tiny and the debugging cost of not doing it is enormous.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;prefers-reduced-motion&lt;/code&gt; went in as part of the visual redesign, not as an afterthought. The typewriter animation respects it. The page transitions respect it. Adding it later would have meant auditing every animation individually.&lt;/p&gt;
&lt;h2&gt;AstroPaper is a good base&lt;/h2&gt;
&lt;p&gt;I like AstroPaper. The content collections work well, the RSS generation is solid, and the overall structure makes sense. The Astro content layer does what it&amp;#39;s supposed to do.&lt;/p&gt;
&lt;p&gt;The challenge is making it look like yours and not like a theme demo. I found that typography and layout changes go much further than color changes. You can swap the entire palette and the site still reads as &amp;quot;AstroPaper with different colors.&amp;quot; Change the font stack, adjust the line height, widen the content area, and suddenly it feels different.&lt;/p&gt;
&lt;p&gt;I kept all the core functionality: Pagefind search, RSS and Atom feeds, dynamic OG images. Didn&amp;#39;t need to rebuild any of that. The work was almost entirely in the presentation layer.&lt;/p&gt;
&lt;h2&gt;Honest assessment&lt;/h2&gt;
&lt;p&gt;The blog looks like mine now. That was the goal and I think I got there.&lt;/p&gt;
&lt;p&gt;Some things are still rough. There&amp;#39;s no mobile sidebar, which is intentional but also lazy. I went with a simplified header nav on mobile instead of building a proper slide-out menu. It works, but it&amp;#39;s not great.&lt;/p&gt;
&lt;p&gt;The typewriter animation on the homepage is fun. I like it. I also suspect some visitors find it annoying. I&amp;#39;m keeping it anyway because this is my site and I get to be a little self-indulgent.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ll keep tweaking things. That&amp;#39;s what personal sites are for. They&amp;#39;re never finished, and that&amp;#39;s fine.&lt;/p&gt;
&lt;h2&gt;The series&lt;/h2&gt;
&lt;p&gt;If you&amp;#39;ve followed along, here&amp;#39;s the full list:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;/posts/redesigning-my-blog-the-audit/&quot;&gt;Part 1: The audit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/posts/redesigning-my-blog-the-blogroll/&quot;&gt;Part 2: The blogroll&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/posts/redesigning-my-blog-terminal-blueprint/&quot;&gt;Part 3: From stock theme to terminal blueprint&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/posts/redesigning-my-blog-the-small-stuff/&quot;&gt;Part 4: The small stuff&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 5: What I learned (you&amp;#39;re here)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you&amp;#39;re thinking about redesigning your blog, start with the audit. Ship in phases. Delete things that try too hard. The rest sorts itself out.&lt;/p&gt;
</content:encoded></item><item><title>Redesigning my blog, part 4: the small stuff</title><link>https://rezhajul.io/posts/redesigning-my-blog-the-small-stuff/</link><guid isPermaLink="true">https://rezhajul.io/posts/redesigning-my-blog-the-small-stuff/</guid><description>Dropdown navigation, a categorized tech stack, file organization, and auto-publishing. The polishing work nobody notices but everyone feels.</description><pubDate>Mon, 16 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;After the big layout changes from part 3, I had a blog that &lt;em&gt;looked&lt;/em&gt; different but still felt half-done. The nav was overcrowded. My content directory was a mess. Scheduled posts required me to manually trigger deploys. None of this was visible to readers, but it bugged me every time I opened the project.&lt;/p&gt;
&lt;p&gt;This post is about all those little things.&lt;/p&gt;
&lt;h2&gt;The nav problem&lt;/h2&gt;
&lt;p&gt;My old navigation had every page listed as a flat row of links: Posts, Notes, Links, Tags, Now, Uses, Almanac, Blogroll, About, Archives, Search, plus a theme toggle. Twelve items. On mobile it was a scrolling disaster, and even on desktop it felt cluttered.&lt;/p&gt;
&lt;p&gt;The fix was grouping. I ended up with three dropdown menus: &lt;strong&gt;Logs&lt;/strong&gt; (Posts, Notes, Tags), &lt;strong&gt;Bookmarks&lt;/strong&gt; (Links, Blogroll), and &lt;strong&gt;./about&lt;/strong&gt; (About, Now, Uses, Almanac). Archives, Search, and Theme stayed as icon buttons on the right side.&lt;/p&gt;
&lt;p&gt;For the dropdowns I went with native &lt;code&gt;&amp;lt;details&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;summary&amp;gt;&lt;/code&gt; elements instead of building a custom component from scratch. They handle keyboard interaction out of the box. Enter and Space toggle them, they have proper ARIA states, and screen readers understand them without extra work from me.&lt;/p&gt;
&lt;p&gt;I did need a little JavaScript on top. Three things: opening one dropdown should close the others, clicking outside should close all of them, and pressing Escape should close them too. Here&amp;#39;s the relevant bits from the Header script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const dropdowns =
  document.querySelectorAll(&amp;quot;[data-dropdown]&amp;quot;);

// Opening one closes others
dropdowns.forEach(dropdown =&amp;gt; {
  dropdown.addEventListener(&amp;quot;toggle&amp;quot;, () =&amp;gt; {
    if (dropdown.open) {
      dropdowns.forEach(other =&amp;gt; {
        if (other !== dropdown &amp;amp;&amp;amp; other.open) {
          other.open = false;
        }
      });
    }
  }, { signal });
});

// Click outside to close
document.addEventListener(&amp;quot;click&amp;quot;, event =&amp;gt; {
  dropdowns.forEach(dropdown =&amp;gt; {
    if (dropdown.open &amp;amp;&amp;amp; !dropdown.contains(event.target)) {
      dropdown.open = false;
    }
  });
}, { signal });

// Escape to close
document.addEventListener(&amp;quot;keydown&amp;quot;, event =&amp;gt; {
  if (event.key === &amp;quot;Escape&amp;quot;) {
    dropdowns.forEach(dropdown =&amp;gt; {
      if (dropdown.open) {
        dropdown.open = false;
        dropdown.querySelector(&amp;quot;summary&amp;quot;)?.focus();
      }
    });
  }
}, { signal });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice the &lt;code&gt;{ signal }&lt;/code&gt; on every listener? That&amp;#39;s an &lt;code&gt;AbortController&lt;/code&gt; pattern. Without it, Astro&amp;#39;s view transitions cause a specific problem: each page navigation re-runs the script, registering duplicate event listeners. After navigating five pages, you&amp;#39;d have five copies of each handler. The cleanup looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;let navAbort = null;

function initNavigation() {
  navAbort?.abort();
  navAbort = new AbortController();
  const { signal } = navAbort;

  // ... all listeners use { signal } ...
}

initNavigation();
document.addEventListener(&amp;quot;astro:after-swap&amp;quot;, initNavigation);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Every time &lt;code&gt;initNavigation&lt;/code&gt; runs, it aborts the previous controller, which automatically removes all listeners tied to that signal. Then it creates a fresh controller for the new set. Clean.&lt;/p&gt;
&lt;h2&gt;A tech stack that reflects reality&lt;/h2&gt;
&lt;p&gt;My sidebar had a tech stack section, but it was basically a wishlist. I&amp;#39;d thrown in things I barely touched and left out things I use daily.&lt;/p&gt;
&lt;p&gt;I wanted to make it honest. So I grep&amp;#39;d through all 107 markdown files in the blog directory and counted mentions. Python showed up 288 times. Docker 121 times. Kafka, Spark, and BigQuery all had real representation from my data engineering posts. Meanwhile, some entries were LinkedIn pollution, things like &amp;quot;Chemistry&amp;quot; and &amp;quot;Microsoft Word&amp;quot; that somehow ended up in my skills list years ago. Those got cut.&lt;/p&gt;
&lt;p&gt;I cross-referenced the blog counts with my recent GitHub repos to pick up things I actively use but haven&amp;#39;t blogged about yet, like Zig and Hono and Cloudflare Workers.&lt;/p&gt;
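&lt;p&gt;The counting step is nothing fancy; &lt;code&gt;grep -c&lt;/code&gt; gets you most of the way. Here&amp;#39;s the same idea as a small Python helper (directory path and term list are placeholders):&lt;/p&gt;

```python
import re
from collections import Counter
from pathlib import Path

def count_mentions(blog_dir, terms):
    # Whole-word, case-insensitive counts across every markdown post.
    counts = Counter({term: 0 for term in terms})
    for post in Path(blog_dir).glob("*.md"):
        text = post.read_text(encoding="utf-8").lower()
        for term in terms:
            pattern = rf"\b{re.escape(term.lower())}\b"
            counts[term] += len(re.findall(pattern, text))
    return counts
```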
&lt;p&gt;The result is a typed constant in &lt;code&gt;constants.ts&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;export const TECH_STACK: Record&amp;lt;string, string[]&amp;gt; = {
  Languages: [
    &amp;quot;Python&amp;quot;, &amp;quot;Golang&amp;quot;, &amp;quot;Java&amp;quot;, &amp;quot;JavaScript&amp;quot;,
    &amp;quot;TypeScript&amp;quot;, &amp;quot;PHP&amp;quot;, &amp;quot;Bash&amp;quot;, &amp;quot;SQL&amp;quot;, &amp;quot;Rust&amp;quot;,
  ],
  &amp;quot;Data &amp;amp; Streaming&amp;quot;: [
    &amp;quot;Spark&amp;quot;, &amp;quot;Kafka&amp;quot;, &amp;quot;Airflow&amp;quot;, &amp;quot;BigQuery&amp;quot;,
    &amp;quot;Redshift&amp;quot;, &amp;quot;MaxCompute&amp;quot;, &amp;quot;Jupyter&amp;quot;,
  ],
  Databases: [&amp;quot;PostgreSQL&amp;quot;, &amp;quot;MySQL&amp;quot;, &amp;quot;MongoDB&amp;quot;, &amp;quot;SQLite&amp;quot;, &amp;quot;Redis&amp;quot;],
  Cloud: [&amp;quot;AWS&amp;quot;, &amp;quot;GCP&amp;quot;, &amp;quot;Alibaba Cloud&amp;quot;, &amp;quot;Cloudflare&amp;quot;],
  &amp;quot;Web &amp;amp; Frameworks&amp;quot;: [&amp;quot;Django&amp;quot;, &amp;quot;Flask&amp;quot;, &amp;quot;Astro&amp;quot;, &amp;quot;FastAPI&amp;quot;, &amp;quot;Hono&amp;quot;],
  DevOps: [
    &amp;quot;Docker&amp;quot;, &amp;quot;Kubernetes&amp;quot;, &amp;quot;Terraform&amp;quot;, &amp;quot;Nginx&amp;quot;,
    &amp;quot;Caddy&amp;quot;, &amp;quot;Github&amp;quot;, &amp;quot;GitLab&amp;quot;, &amp;quot;Linux&amp;quot;, &amp;quot;Ansible&amp;quot;,
  ],
  AI: [&amp;quot;Claude&amp;quot;, &amp;quot;OpenAI&amp;quot;, &amp;quot;LMStudio&amp;quot;, &amp;quot;OpenClaw&amp;quot;],
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Rendering it is straightforward. Loop over the keys, render each category as a heading, and list the items underneath. The categories give it structure without turning it into a resume.&lt;/p&gt;
&lt;h2&gt;Year-prefix file migration&lt;/h2&gt;
&lt;p&gt;This one&amp;#39;s boring but I&amp;#39;m glad I did it. I had 105 blog posts all sitting in &lt;code&gt;src/data/blog/&lt;/code&gt; with names like &lt;code&gt;some-post.md&lt;/code&gt;. No chronological ordering. Finding a specific post meant scrolling through an alphabetical list and guessing.&lt;/p&gt;
&lt;p&gt;I wanted to rename everything to &lt;code&gt;YYYY-some-post.md&lt;/code&gt;. The year prefix makes the directory scannable at a glance and groups posts by era.&lt;/p&gt;
&lt;p&gt;The catch is that Astro&amp;#39;s glob loader derives entry IDs from filenames. Rename &lt;code&gt;some-post.md&lt;/code&gt; to &lt;code&gt;2024-some-post.md&lt;/code&gt; and suddenly the URL changes from &lt;code&gt;/posts/some-post&lt;/code&gt; to &lt;code&gt;/posts/2024-some-post&lt;/code&gt;. That breaks every existing link.&lt;/p&gt;
&lt;p&gt;The solution: add an explicit &lt;code&gt;slug&lt;/code&gt; field to each post&amp;#39;s frontmatter &lt;em&gt;before&lt;/em&gt; renaming. Astro respects &lt;code&gt;slug&lt;/code&gt; as an override for the filename-based ID. I wrote a migration script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ts&quot;&gt;import { readdir, readFile, writeFile, rename } from &amp;quot;node:fs/promises&amp;quot;;
import { join } from &amp;quot;node:path&amp;quot;;

const CONTENT_DIR = &amp;quot;src/data/blog&amp;quot;;

async function processFiles() {
  const files = (await readdir(CONTENT_DIR))
    .filter(f =&amp;gt; f.endsWith(&amp;quot;.md&amp;quot;));

  for (const filename of files) {
    // Already prefixed? Skip it so reruns are safe.
    if (/^\d{4}-/.test(filename)) continue;

    const filepath = join(CONTENT_DIR, filename);
    const originalSlug = filename.replace(&amp;quot;.md&amp;quot;, &amp;quot;&amp;quot;);
    let content = await readFile(filepath, &amp;quot;utf-8&amp;quot;);

    const fmMatch = content.match(/^---\n([\s\S]*?)\n---/);
    if (!fmMatch) continue;

    let fmContent = fmMatch[1];

    // Add slug if missing
    if (!fmContent.includes(&amp;quot;slug:&amp;quot;)) {
      fmContent = fmContent.replace(
        /(title:.*)/,
        `$1\nslug: &amp;quot;${originalSlug}&amp;quot;`
      );
    }

    // Get year from pubDatetime
    const dateMatch = fmContent.match(
      /pubDatetime: [&amp;quot;&amp;#39;]?(\d{4})-\d{2}-\d{2}/
    );
    if (!dateMatch) continue;

    const year = dateMatch[1];
    const newFilename = `${year}-${originalSlug}.md`;

    // Write updated frontmatter, then rename
    content = content.replace(
      /^---\n[\s\S]*?\n---/,
      `---\n${fmContent}\n---`
    );
    await writeFile(filepath, content);
    await rename(filepath, join(CONTENT_DIR, newFilename));
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It backs up the directory first, processes each file in order, skips anything already prefixed, and logs what it does. I ran it once, verified no URLs broke, and deleted the backup.&lt;/p&gt;
&lt;p&gt;I also updated the content scaffolding script (&lt;code&gt;bun run new&lt;/code&gt;) to automatically prefix new posts with the current year. One less thing to think about.&lt;/p&gt;
&lt;h2&gt;Scheduled posts and auto-publishing&lt;/h2&gt;
&lt;p&gt;AstroPaper ships a &lt;code&gt;scheduledPostMargin&lt;/code&gt; config option that lets you set a future &lt;code&gt;pubDatetime&lt;/code&gt; and have the post hidden until that date arrives. The problem: this only works when the site rebuilds. My Cloudflare Pages deploys only triggered on git push. If I scheduled a post for Tuesday morning but didn&amp;#39;t push anything on Tuesday, it just wouldn&amp;#39;t appear.&lt;/p&gt;
&lt;p&gt;The fix was a GitHub Actions cron job. It runs &lt;code&gt;scripts/check-scheduled-posts.ts&lt;/code&gt; every 15 minutes, looks for posts with a &lt;code&gt;pubDatetime&lt;/code&gt; in the near future, and triggers a rebuild if it finds any. No wasted builds when nothing is scheduled.&lt;/p&gt;
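&lt;p&gt;The actual script is TypeScript, but the decision it makes is a few lines of date math. Sketched in Python (the window length and function shape are my assumptions; triggering a build slightly early is harmless, because &lt;code&gt;scheduledPostMargin&lt;/code&gt; keeps the post hidden until its date actually passes):&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def needs_rebuild(pub_dates, now=None, window_minutes=15):
    # Rebuild only when a scheduled post goes live within the next
    # cron window; otherwise skip and save the build minutes.
    now = now or datetime.now(timezone.utc)
    window = timedelta(minutes=window_minutes)
    return any(d >= now and now + window >= d for d in pub_dates)
```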
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;None of this makes for exciting screenshots. Dropdown menus, file renames, and cron jobs aren&amp;#39;t the kind of thing you show off. But this is the layer that makes a blog feel finished instead of &amp;quot;in progress.&amp;quot;&lt;/p&gt;
&lt;p&gt;This is part 4 of a series about redesigning this blog.&lt;/p&gt;
</content:encoded></item><item><title>Redesigning my blog, part 3: from stock theme to terminal blueprint</title><link>https://rezhajul.io/posts/redesigning-my-blog-terminal-blueprint/</link><guid isPermaLink="true">https://rezhajul.io/posts/redesigning-my-blog-terminal-blueprint/</guid><description>The big visual overhaul. New fonts, new colors, terminal hero, data pipeline cards, two-rail layout, and all the bugs that came with it.</description><pubDate>Sun, 15 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This blog was running stock AstroPaper. If you&amp;#39;ve seen one AstroPaper blog, you&amp;#39;ve seen them all: same layout, same colors, same Fira Code headings. I wanted something that looked like it belonged to me.&lt;/p&gt;
&lt;p&gt;I spent a Saturday morning looking at personal sites that had actual personality. &lt;a href=&quot;https://gwern.net&quot;&gt;gwern.net&lt;/a&gt; with its heavy typography and footnotes. &lt;a href=&quot;https://chriscoyier.net&quot;&gt;chriscoyier.net&lt;/a&gt; with its playful layout shifts. &lt;a href=&quot;https://cassidoo.co&quot;&gt;cassidoo.co&lt;/a&gt; with that confident color use. The common thread was that none of them looked like a template.&lt;/p&gt;
&lt;p&gt;I&amp;#39;m a data engineer. I live in terminals. I read logs for a living. So the concept I landed on was &amp;quot;Engineer&amp;#39;s Notebook / Terminal Blueprint.&amp;quot; The terminal and data pipeline metaphor felt honest to what I actually do every day.&lt;/p&gt;
&lt;h2&gt;Phase 1: Typography and color&lt;/h2&gt;
&lt;p&gt;The fastest way to break the stock look is fonts.&lt;/p&gt;
&lt;p&gt;I paired &lt;a href=&quot;https://www.jetbrains.com/lp/mono/&quot;&gt;JetBrains Mono&lt;/a&gt; for headings with &lt;a href=&quot;https://brailleinstitute.org/freefont&quot;&gt;Atkinson Hyperlegible&lt;/a&gt; for body text. Mono headings against a proportional body font immediately signals &amp;quot;this is a developer&amp;#39;s blog&amp;quot; without screaming about it. The contrast between the two faces does most of the heavy lifting.&lt;/p&gt;
&lt;p&gt;Color was next. The default AstroPaper palette is fine, but it&amp;#39;s generic. I added semantic color tokens to the CSS:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;:root {
  --surface: #f4f4f5;
  --accent-2: #0d9488;
}

[data-theme=&amp;quot;dark&amp;quot;] {
  --surface: #111827;
  --accent-2: #2dd4bf;
  --background: #0b1020;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;--surface&lt;/code&gt; is for card backgrounds, &lt;code&gt;--accent-2&lt;/code&gt; is a teal secondary color. That dark mode background at &lt;code&gt;#0b1020&lt;/code&gt; is way deeper than the default. It actually looks like a terminal now.&lt;/p&gt;
&lt;p&gt;Last thing: I set the prose measure to &lt;code&gt;max-width: 72ch&lt;/code&gt;. Long lines are hard to read and 72 characters is the sweet spot I kept coming back to.&lt;/p&gt;
&lt;h2&gt;Phase 1: Terminal hero&lt;/h2&gt;
&lt;p&gt;The stock AstroPaper hero is just a heading and a tagline. I replaced it with a terminal prompt:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rezha@jakarta:~$ abusing computers for fun &amp;amp; profit▍
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The blinking cursor is pure CSS:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;.cursor {
  display: inline-block;
  width: 0.5em;
  height: 1.1em;
  background: var(--accent);
  vertical-align: bottom;
  animation: blink 1s step-end infinite;
}

@keyframes blink {
  0%, 100% { opacity: 1; }
  50% { opacity: 0; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Later I added a typewriter effect that loops the tagline. The animation types out the text character by character, pauses, deletes, and starts over. Getting the cursor to sit on the same baseline as the prompt text was annoying. The fix turned out to be &lt;code&gt;vertical-align: bottom&lt;/code&gt; on the cursor element. Without it, the cursor would jump a pixel or two above the text depending on the font metrics.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;.typewriter {
  display: inline-block;
  overflow: hidden;
  white-space: nowrap;
  border-right: none;
  animation:
    typing 3s steps(40) 1s forwards,
    pause 8s step-end infinite;
  max-width: 0;
}

/* typing just expands the visible width, one step per character;
   the pause-delete-retype loop is driven from JS */
@keyframes typing {
  from { max-width: 0; }
  to { max-width: 40ch; }
}

.typewriter.active {
  max-width: 100%;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Phase 1: Data pipeline cards&lt;/h2&gt;
&lt;p&gt;I wanted the post list to look less like a list and more like a data pipeline. In &lt;code&gt;Card.astro&lt;/code&gt;, I added a vertical line running down the left side with a colored dot at each post. Think of it like a pipeline spine.&lt;/p&gt;
&lt;p&gt;Featured posts get an orange dot using &lt;code&gt;var(--accent)&lt;/code&gt;. Recent posts get teal with &lt;code&gt;var(--accent-2)&lt;/code&gt;. Everything else gets the muted default.&lt;/p&gt;
&lt;p&gt;I also reformatted the metadata line. Instead of the usual &amp;quot;January 15, 2026&amp;quot; format, it looks more like a system log: mono font, tiny text, and it includes the primary tag.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;&amp;lt;div class=&amp;quot;pipeline-node&amp;quot;&amp;gt;
  &amp;lt;span
    class=&amp;quot;dot&amp;quot;
    style={`background: ${featured ? &amp;#39;var(--accent)&amp;#39; : &amp;#39;var(--accent-2)&amp;#39;}`}
  /&amp;gt;
  &amp;lt;span class=&amp;quot;meta font-mono text-xs opacity-70&amp;quot;&amp;gt;
    {formattedDate} · {primaryTag}
  &amp;lt;/span&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The vertical line connecting the dots is just a &lt;code&gt;border-left: 2px solid var(--accent-2)&lt;/code&gt; on the container with some padding. Simple, but it completely changes how the page reads.&lt;/p&gt;
&lt;h2&gt;Phase 2: Two-rail homepage&lt;/h2&gt;
&lt;p&gt;The single-column layout felt wasteful on wide screens. I split the homepage into a two-rail grid:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;.home-grid {
  @apply lg:grid lg:grid-cols-[1fr_280px] lg:gap-10;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The main column has posts. The right sidebar (desktop only, collapses on mobile) has a few widgets: a &amp;quot;Now&amp;quot; status blurb, tech stack badges, and an RSS subscribe link.&lt;/p&gt;
&lt;p&gt;One layout detail that took way too long: the sidebar needed &lt;code&gt;pt-8&lt;/code&gt; to align visually with the hero section. I tried &lt;code&gt;self-start&lt;/code&gt; first because that seemed correct, but it broke sticky scrolling. The padding hack isn&amp;#39;t pretty, but it works.&lt;/p&gt;
&lt;h2&gt;Phase 2: Animated underlines&lt;/h2&gt;
&lt;p&gt;Default browser underlines are ugly and they cut through descenders. I replaced them with a &lt;code&gt;background-size&lt;/code&gt; animation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;a {
  text-decoration: none;
  background-image: linear-gradient(currentColor, currentColor);
  background-position: 0% 100%;
  background-repeat: no-repeat;
  background-size: 100% 1px;
  transition: background-size 0.2s ease;
}

a:hover {
  background-size: 100% 2px;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Links start with a 1px underline and thicken to 2px on hover. It&amp;#39;s a small detail but it makes the whole site feel more considered.&lt;/p&gt;
&lt;h2&gt;Phase 2: ETL pipeline SVG&lt;/h2&gt;
&lt;p&gt;I drew a small inline SVG diagram on the homepage showing an extract, transform, load flow. It was a nod to the data engineering theme. I actually removed it at one point because it felt like too much, then I got asked to bring it back. It sits centered with &lt;code&gt;mx-auto&lt;/code&gt; and ties the whole terminal/pipeline concept together.&lt;/p&gt;
&lt;h2&gt;Post-redesign bugs&lt;/h2&gt;
&lt;p&gt;Redesigns break things. Here&amp;#39;s what I found.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Search broke entirely.&lt;/strong&gt; Pagefind generates its index from the &lt;code&gt;dist&lt;/code&gt; folder at build time. The dev server doesn&amp;#39;t have a &lt;code&gt;dist&lt;/code&gt; folder. I kept refreshing the search page wondering why it was empty. Fix: run &lt;code&gt;bun run build&lt;/code&gt; first, then use the dev server. Obvious in hindsight.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Theme detection was wrong.&lt;/strong&gt; I had code checking &lt;code&gt;classList.contains(&amp;quot;dark&amp;quot;)&lt;/code&gt; but this site uses the &lt;code&gt;data-theme=&amp;quot;dark&amp;quot;&lt;/code&gt; attribute on the HTML element, not a class. The ARIA labels on the theme toggle were also backwards because of this.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Invalid scroll behavior.&lt;/strong&gt; I had &lt;code&gt;window.scrollTo({ behavior: &amp;quot;instant&amp;quot; })&lt;/code&gt; in a few places. &lt;code&gt;&amp;quot;instant&amp;quot;&lt;/code&gt; was only added to the spec recently; in browsers that predate it, only &lt;code&gt;&amp;quot;auto&amp;quot;&lt;/code&gt; and &lt;code&gt;&amp;quot;smooth&amp;quot;&lt;/code&gt; exist, and an unrecognized value makes the call throw a &lt;code&gt;TypeError&lt;/code&gt;. Changed them all to &lt;code&gt;&amp;quot;auto&amp;quot;&lt;/code&gt;, which works everywhere.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Scroll-smooth without motion preference.&lt;/strong&gt; I applied &lt;code&gt;scroll-smooth&lt;/code&gt; globally on the &lt;code&gt;html&lt;/code&gt; element. That&amp;#39;s bad for people who get motion sick. Moved it inside a media query:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;@media (prefers-reduced-motion: no-preference) {
  html {
    scroll-behavior: smooth;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Heading anchors lacked accessible names.&lt;/strong&gt; The auto-generated heading anchor links had no &lt;code&gt;aria-label&lt;/code&gt; or visible text for screen readers. Added &lt;code&gt;aria-label=&amp;quot;Link to this section&amp;quot;&lt;/code&gt; on each one.&lt;/p&gt;
&lt;h2&gt;What&amp;#39;s next&lt;/h2&gt;
&lt;p&gt;This was the biggest chunk of work in the whole redesign. The blog finally looks like my blog and not a demo site. There are still rough edges, but the identity is there.&lt;/p&gt;
&lt;p&gt;This is part 3 of a series about redesigning this blog.&lt;/p&gt;
</content:encoded></item><item><title>Redesigning my blog, part 2: the blogroll</title><link>https://rezhajul.io/posts/redesigning-my-blog-the-blogroll/</link><guid isPermaLink="true">https://rezhajul.io/posts/redesigning-my-blog-the-blogroll/</guid><description>I built a terminal-styled blogroll with ASCII art, pulsing RSS indicators, and OPML export. Blogrolls are back.</description><pubDate>Sat, 14 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Blogrolls used to be everywhere. Back in the early web, people would keep a sidebar full of links to other blogs they liked. It was how you discovered things. Then social media happened, algorithms took over discovery, and blogrolls mostly disappeared.&lt;/p&gt;
&lt;p&gt;The IndieWeb crowd has been pushing to bring them back, and I think they&amp;#39;re right. If you read someone&amp;#39;s blog and like it, why not tell your visitors about it? It costs nothing.&lt;/p&gt;
&lt;p&gt;I wanted a blogroll for this site, but I didn&amp;#39;t want a boring &lt;code&gt;&amp;lt;ul&amp;gt;&lt;/code&gt; of links. I wanted something with personality. So I went with a retro terminal/BBS look.&lt;/p&gt;
&lt;h2&gt;The data structure&lt;/h2&gt;
&lt;p&gt;Everything starts in &lt;code&gt;constants.ts&lt;/code&gt;. I defined a &lt;code&gt;BlogrollEntry&lt;/code&gt; type and an array of categories:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;export const BLOGROLL_CATEGORIES = [
  &amp;quot;indonesian&amp;quot;,
  &amp;quot;tech&amp;quot;,
  &amp;quot;indieweb&amp;quot;,
  &amp;quot;personal&amp;quot;,
  &amp;quot;tools&amp;quot;,
] as const;

export type BlogrollEntry = {
  name: string;
  url: string;
  feed?: string;
  description: string;
  category: (typeof BLOGROLL_CATEGORIES)[number];
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;feed&lt;/code&gt; field is optional. Not every blog advertises their RSS URL in the same place, so I can override it when needed. Otherwise the OPML export falls back to appending &lt;code&gt;/rss.xml&lt;/code&gt; to the URL.&lt;/p&gt;
&lt;p&gt;Adding a new blog is just pushing an object onto the &lt;code&gt;BLOGROLL&lt;/code&gt; array:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;{
  name: &amp;quot;Wayan Jimmy&amp;quot;,
  url: &amp;quot;https://blog.wayanjimmy.xyz&amp;quot;,
  description: &amp;quot;Code, life, and everything in between&amp;quot;,
  category: &amp;quot;indonesian&amp;quot;,
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No CMS, no database. It&amp;#39;s an array in a TypeScript file.&lt;/p&gt;
&lt;h2&gt;The card component&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;BlogrollCard.astro&lt;/code&gt; is where the terminal aesthetic happens. Each card gets a few small visual details that make it feel like a BBS listing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;›&lt;/code&gt; prefix before the blog name, like a terminal prompt&lt;/li&gt;
&lt;li&gt;&lt;code&gt;//&lt;/code&gt; before the description, styled as a code comment&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;url:&lt;/code&gt; label showing the stripped domain&lt;/li&gt;
&lt;li&gt;A pulsing green dot next to blogs that have an RSS feed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here&amp;#39;s the feed indicator, which I&amp;#39;m pretty happy with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;{feed &amp;amp;&amp;amp; (
  &amp;lt;a href={feed} target=&amp;quot;_blank&amp;quot; rel=&amp;quot;noopener noreferrer&amp;quot;
     class=&amp;quot;flex items-center gap-1.5 font-mono text-xs text-foreground/50 hover:text-accent&amp;quot;
     aria-label={`RSS feed for ${name} (opens in new tab)`}&amp;gt;
    &amp;lt;span class=&amp;quot;relative flex h-2 w-2&amp;quot;&amp;gt;
      &amp;lt;span class=&amp;quot;absolute inline-flex h-full w-full motion-safe:animate-ping rounded-full bg-accent/75 opacity-75&amp;quot; /&amp;gt;
      &amp;lt;span class=&amp;quot;relative inline-flex h-2 w-2 rounded-full bg-accent&amp;quot; /&amp;gt;
    &amp;lt;/span&amp;gt;
    feed
  &amp;lt;/a&amp;gt;
)}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;motion-safe:animate-ping&lt;/code&gt; bit is worth noting. Tailwind&amp;#39;s &lt;code&gt;motion-safe:&lt;/code&gt; variant means the pulsing animation only runs if the user hasn&amp;#39;t enabled &amp;quot;reduce motion&amp;quot; in their OS settings. People with vestibular disorders don&amp;#39;t need a bunch of blinking dots on the page.&lt;/p&gt;
&lt;p&gt;Blog names are wrapped in &lt;code&gt;&amp;lt;h3&amp;gt;&lt;/code&gt; tags. This wasn&amp;#39;t my first instinct (I originally used plain &lt;code&gt;&amp;lt;a&amp;gt;&lt;/code&gt; tags), but it means screen reader users navigating by headings can jump between blogs. Small thing, big difference.&lt;/p&gt;
&lt;h2&gt;The page layout&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;blogroll.astro&lt;/code&gt; has an ASCII art header at the top:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;┌──────────────────────────────────────┐
│  BLOGROLL v1.0 :: NETWORK FEEDS      │
│  Status: ONLINE                      │
└──────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&amp;#39;s hidden on mobile with &lt;code&gt;hidden sm:block&lt;/code&gt; because ASCII art breaks completely on small screens. The narrow viewport just can&amp;#39;t fit the box characters without wrapping and turning the whole thing into garbage. Instead, mobile visitors get a plain-text fallback that reads &lt;code&gt;BLOGROLL v1.0 :: NETWORK FEEDS // Status: ONLINE&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Below the header, blogs are grouped by category. Each group is a &lt;code&gt;&amp;lt;section&amp;gt;&lt;/code&gt; with an &lt;code&gt;aria-labelledby&lt;/code&gt; pointing to the category heading. The cards themselves sit in a &lt;code&gt;&amp;lt;ul role=&amp;quot;list&amp;quot;&amp;gt;&lt;/code&gt; with each card in an &lt;code&gt;&amp;lt;li&amp;gt;&lt;/code&gt;. I know, it&amp;#39;s just a list of links, but the semantic structure means screen readers can announce &amp;quot;list, 9 items&amp;quot; when entering a category, which is actually useful.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;&amp;lt;ul role=&amp;quot;list&amp;quot; class=&amp;quot;grid gap-3 sm:grid-cols-2&amp;quot;&amp;gt;
  {blogs.map(blog =&amp;gt; (
    &amp;lt;li&amp;gt;&amp;lt;BlogrollCard {...blog} /&amp;gt;&amp;lt;/li&amp;gt;
  ))}
&amp;lt;/ul&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Two columns on desktop, one column on mobile. Nothing fancy.&lt;/p&gt;
&lt;h2&gt;OPML export&lt;/h2&gt;
&lt;p&gt;This was probably the most practical feature. &lt;code&gt;blogroll.opml.ts&lt;/code&gt; is an Astro API route that generates an OPML file, which is the standard format for exchanging RSS subscription lists.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;export const GET: APIRoute = () =&amp;gt; {
  const categories = [...new Set(BLOGROLL.map(b =&amp;gt; b.category))];

  const outlines = categories.map(category =&amp;gt; {
    const blogs = BLOGROLL.filter(b =&amp;gt; b.category === category);
    const blogOutlines = blogs.map(blog =&amp;gt; {
      const feedUrl = blog.feed || `${blog.url.replace(/\/$/, &amp;quot;&amp;quot;)}/rss.xml`;
      return `&amp;lt;outline type=&amp;quot;rss&amp;quot; text=&amp;quot;${escapeXml(blog.name)}&amp;quot; xmlUrl=&amp;quot;${escapeXml(feedUrl)}&amp;quot; htmlUrl=&amp;quot;${escapeXml(blog.url)}&amp;quot; /&amp;gt;`;
    }).join(&amp;quot;\n&amp;quot;);
    return `&amp;lt;outline text=&amp;quot;${category}&amp;quot;&amp;gt;\n${blogOutlines}\n&amp;lt;/outline&amp;gt;`;
  });
  // ... wrap in OPML boilerplate and return as XML
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Blogs get grouped by category in the output, just like on the page. If a blog entry doesn&amp;#39;t have an explicit &lt;code&gt;feed&lt;/code&gt; URL, it falls back to &lt;code&gt;${url}/rss.xml&lt;/code&gt;, which works for most blogs.&lt;/p&gt;
&lt;p&gt;There&amp;#39;s an &lt;code&gt;escapeXml&lt;/code&gt; helper that handles &lt;code&gt;&amp;amp;&lt;/code&gt;, &lt;code&gt;&amp;lt;&lt;/code&gt;, &lt;code&gt;&amp;gt;&lt;/code&gt;, quotes, and apostrophes. XML is picky about these things, and I&amp;#39;d rather not ship a broken file because someone has an ampersand in their blog name.&lt;/p&gt;
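&lt;p&gt;For reference, a minimal version of that helper looks something like this (a sketch of the general shape, not the exact code in the repo; the ampersand replacement has to run first, or you end up double-escaping the entities you just produced):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;function escapeXml(value: string): string {
  return value
    .replace(/&amp;amp;/g, &amp;quot;&amp;amp;amp;&amp;quot;) // must run first
    .replace(/&amp;lt;/g, &amp;quot;&amp;amp;lt;&amp;quot;)
    .replace(/&amp;gt;/g, &amp;quot;&amp;amp;gt;&amp;quot;)
    .replace(/&amp;quot;/g, &amp;quot;&amp;amp;quot;&amp;quot;)
    .replace(/&amp;#39;/g, &amp;quot;&amp;amp;apos;&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;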
&lt;p&gt;The page has a download button at the bottom. Click it, save the file, and import it into Miniflux, NetNewsWire, Feedly, or whatever RSS reader you use.&lt;/p&gt;
&lt;h2&gt;Accessibility refinements&lt;/h2&gt;
&lt;p&gt;After the initial build, I went through the page with the keyboard and a screen reader. Found a few things I&amp;#39;d missed.&lt;/p&gt;
&lt;p&gt;External links didn&amp;#39;t have ARIA labels. Clicking &amp;quot;Wayan Jimmy&amp;quot; opens a new tab, and sighted users can see that from context, but a screen reader just announces the link text. I added &lt;code&gt;aria-label=&amp;quot;Visit ${name} (opens in new tab)&amp;quot;&lt;/code&gt; to every blog link so the behavior is clear.&lt;/p&gt;
&lt;p&gt;I also added &lt;code&gt;group-focus-within&lt;/code&gt; states to the cards. When you tab into a card, the left border accent and background change appear, matching the hover effect. Keyboard navigation should feel as intentional as mouse navigation.&lt;/p&gt;
&lt;h2&gt;Closing&lt;/h2&gt;
&lt;p&gt;A blogroll is a small feature. It took maybe a day to build. But I like what it does: it points people toward things I think are worth reading, and it does it in a way that feels like it belongs on a personal website. Not a recommendation algorithm, just a list I maintain by hand.&lt;/p&gt;
&lt;p&gt;This is part 2 of a series about redesigning this blog.&lt;/p&gt;
</content:encoded></item><item><title>Redesigning my blog, part 1: the audit</title><link>https://rezhajul.io/posts/redesigning-my-blog-the-audit/</link><guid isPermaLink="true">https://rezhajul.io/posts/redesigning-my-blog-the-audit/</guid><description>Before changing anything visual, I spent time auditing my AstroPaper blog for security, SEO, accessibility, and performance issues. Here&apos;s what I found.</description><pubDate>Fri, 13 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This blog runs on &lt;a href=&quot;https://github.com/satnaing/astro-paper&quot;&gt;AstroPaper&lt;/a&gt; v5 with Astro 5 and Tailwind v4. It looked fine. It worked. But it also looked like every other AstroPaper blog on the internet, and that bugged me.&lt;/p&gt;
&lt;p&gt;I wanted to redesign it. New layout, new typography, new everything. But before touching any CSS, I figured I should audit the codebase first. No point making something pretty if the foundation has cracks.&lt;/p&gt;
&lt;p&gt;So I spent a weekend going through the source, running Lighthouse, poking at edge cases. I found more issues than I expected.&lt;/p&gt;
&lt;h2&gt;Security&lt;/h2&gt;
&lt;p&gt;The share links component had &lt;code&gt;rel=&amp;quot;noreferrer&amp;quot;&lt;/code&gt; on external links but was missing &lt;code&gt;noopener&lt;/code&gt;. Per the HTML spec, &lt;code&gt;noreferrer&lt;/code&gt; implies &lt;code&gt;noopener&lt;/code&gt;, and modern browsers null &lt;code&gt;window.opener&lt;/code&gt; for &lt;code&gt;target=&amp;quot;_blank&amp;quot;&lt;/code&gt; by default, but spelling both out costs nothing and covers older browsers with inconsistent behavior. Easy fix:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;&amp;lt;a href={shareUrl} target=&amp;quot;_blank&amp;quot; rel=&amp;quot;noopener noreferrer&amp;quot;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The search page had a worse problem. It was reading the query parameter straight from the URL and injecting it into the page without sanitization. Classic XSS vector. If someone crafted a URL with a script tag in the query string and shared it, the browser would execute it.&lt;/p&gt;
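&lt;p&gt;The fix boils down to never letting the query string touch &lt;code&gt;innerHTML&lt;/code&gt;. A sketch of the safe shape (the helper name is mine, not from the actual codebase):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Read the query with URLSearchParams (it handles percent-decoding),
// then write it into the page with textContent, never innerHTML.
function getSearchQuery(search: string): string {
  return new URLSearchParams(search).get(&amp;quot;q&amp;quot;) ?? &amp;quot;&amp;quot;;
}

// In the page script:
// heading.textContent = `Results for: ${getSearchQuery(location.search)}`;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because &lt;code&gt;textContent&lt;/code&gt; never parses its input as markup, a crafted &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag in the query just shows up as harmless text.&lt;/p&gt;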
&lt;p&gt;Then there were the fonts. Google Fonts were loading from Google&amp;#39;s CDN, which means every visitor&amp;#39;s IP gets sent to Google. Not ideal for privacy. I switched to self-hosting via &lt;code&gt;@fontsource&lt;/code&gt; packages. Fonts load from my own domain now, no external requests.&lt;/p&gt;
&lt;h2&gt;SEO&lt;/h2&gt;
&lt;p&gt;This section had the most problems.&lt;/p&gt;
&lt;p&gt;First: trailing slashes. Some URLs ended with &lt;code&gt;/&lt;/code&gt;, some didn&amp;#39;t. Google sees &lt;code&gt;/posts/hello&lt;/code&gt; and &lt;code&gt;/posts/hello/&lt;/code&gt; as two different pages. I made sure internal links were consistent with trailing slashes, and considered adding &lt;code&gt;trailingSlash: &amp;quot;always&amp;quot;&lt;/code&gt; to the Astro config but held off since Cloudflare Pages handles redirects automatically.&lt;/p&gt;
&lt;p&gt;The canonical URLs were also messy. If someone shared a link with &lt;code&gt;?utm_source=twitter&lt;/code&gt; or whatever, that query string would end up in the &lt;code&gt;&amp;lt;link rel=&amp;quot;canonical&amp;quot;&amp;gt;&lt;/code&gt; and &lt;code&gt;og:url&lt;/code&gt; tags. Two different URLs pointing to the same content. I wrote a small &lt;code&gt;normalizeCanonical()&lt;/code&gt; helper in Layout.astro that strips query params and enforces the trailing slash:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;function normalizeCanonical(url: string): string {
  const u = new URL(url);
  u.search = &amp;quot;&amp;quot;;
  if (!u.pathname.endsWith(&amp;quot;/&amp;quot;)) {
    u.pathname += &amp;quot;/&amp;quot;;
  }
  return u.toString();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The theme toggle script was another headache. It was loaded via a relative path, which meant it worked on the homepage but 404&amp;#39;d on nested routes like &lt;code&gt;/posts/some-post/&lt;/code&gt;. The fix was bundling it as a regular Astro import:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;---
import &amp;quot;@/scripts/theme.ts&amp;quot;;
---
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Bundling through Astro&amp;#39;s import system gives you a path that works from any route, regardless of nesting depth.&lt;/p&gt;
&lt;p&gt;I also added &lt;code&gt;theme-color&lt;/code&gt; meta tags so mobile browsers show the correct color in the address bar for both light and dark mode:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;meta name=&amp;quot;theme-color&amp;quot; content=&amp;quot;#fff&amp;quot; media=&amp;quot;(prefers-color-scheme: light)&amp;quot; /&amp;gt;
&amp;lt;meta name=&amp;quot;theme-color&amp;quot; content=&amp;quot;#1a1a2e&amp;quot; media=&amp;quot;(prefers-color-scheme: dark)&amp;quot; /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And an &lt;code&gt;og:type&lt;/code&gt; meta tag was missing entirely. Small thing, but social media crawlers use it to decide how to render your link preview.&lt;/p&gt;
&lt;h2&gt;Accessibility&lt;/h2&gt;
&lt;p&gt;The theme toggle button had &lt;code&gt;aria-label=&amp;quot;auto&amp;quot;&lt;/code&gt; which is meaningless to a screen reader. &amp;quot;Auto&amp;quot; what? I changed it to describe the actual action, like &amp;quot;Switch to dark mode&amp;quot; or &amp;quot;Switch to light mode&amp;quot; depending on the current state.&lt;/p&gt;
&lt;p&gt;View transitions were eating focus state. When you navigate between pages, Astro&amp;#39;s view transitions swap the DOM, but they weren&amp;#39;t moving focus to the new content. A keyboard user would lose their place after every page change. I added a focus management handler in the &lt;code&gt;astro:after-swap&lt;/code&gt; event.&lt;/p&gt;
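&lt;p&gt;The handler itself is tiny. A sketch of the shape (moving focus to the &lt;code&gt;&amp;lt;main&amp;gt;&lt;/code&gt; landmark is my assumption; &lt;code&gt;astro:after-swap&lt;/code&gt; is Astro&amp;#39;s documented event):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Structural type so the logic is testable outside a browser.
type Focusable = {
  setAttribute(name: string, value: string): void;
  focus(): void;
};

function restoreFocus(target: Focusable | null): void {
  if (!target) return;
  target.setAttribute(&amp;quot;tabindex&amp;quot;, &amp;quot;-1&amp;quot;); // make the landmark focusable
  target.focus();
}

// Wired up in the page script:
// document.addEventListener(&amp;quot;astro:after-swap&amp;quot;, () =&amp;gt;
//   restoreFocus(document.querySelector(&amp;quot;main&amp;quot;)));
&lt;/code&gt;&lt;/pre&gt;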
&lt;p&gt;The SVG icons in social and share link components were purely decorative but didn&amp;#39;t have &lt;code&gt;aria-hidden=&amp;quot;true&amp;quot;&lt;/code&gt;. Screen readers were trying to announce them, reading out gibberish path data. Quick fix across all the icon components:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;&amp;lt;svg aria-hidden=&amp;quot;true&amp;quot; ...&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;What I didn&amp;#39;t touch&lt;/h2&gt;
&lt;p&gt;I left the visual design completely alone for now. The colors, layout, typography, spacing: all stock AstroPaper. This post is about the stuff underneath.&lt;/p&gt;
&lt;h2&gt;What&amp;#39;s next&lt;/h2&gt;
&lt;p&gt;This is part 1 of a series about redesigning this blog. The boring stuff is done. Now the fun starts.&lt;/p&gt;
</content:encoded></item><item><title>Why I Left Zola&apos;s Simplicity for Astro&apos;s Power</title><link>https://rezhajul.io/posts/why-i-left-zola-simplicity-for-astro-power/</link><guid isPermaLink="true">https://rezhajul.io/posts/why-i-left-zola-simplicity-for-astro-power/</guid><description>I moved from Hugo to Zola for simplicity. Then I moved to Astro for flexibility. Here&apos;s why I traded a Rust binary for a Node_modules folder.</description><pubDate>Thu, 12 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A little over a year ago, I wrote about &lt;a href=&quot;/posts/migrating-to-zola&quot;&gt;migrating from Hugo to Zola&lt;/a&gt;. My reasoning back then was solid: I was tired of Go Templates&amp;#39; idiosyncrasies, and Zola&amp;#39;s Tera engine felt like home (hello, Django background!). Plus, Zola is written in Rust, builds fast, and ships as a single binary. What&amp;#39;s not to love?&lt;/p&gt;
&lt;p&gt;I thought Zola was the endgame. Simple, fast, opinionated.&lt;/p&gt;
&lt;p&gt;But here I am, writing this post on a blog built with &lt;a href=&quot;https://astro.build&quot;&gt;Astro&lt;/a&gt;. Yes, I traded a Rust binary for a &lt;code&gt;node_modules&lt;/code&gt; folder the size of a small black hole.&lt;/p&gt;
&lt;p&gt;Why? It wasn&amp;#39;t about speed. It was about &lt;em&gt;ceiling&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;The &amp;quot;Batteries Included&amp;quot; Trap&lt;/h2&gt;
&lt;p&gt;Zola is fantastic because it comes with everything you need built-in: syntax highlighting, search, image processing, Sass support. You don&amp;#39;t need plugins. In fact, you &lt;em&gt;can&amp;#39;t&lt;/em&gt; really have plugins.&lt;/p&gt;
&lt;p&gt;That&amp;#39;s the trade-off. When Zola does exactly what you want, it&amp;#39;s bliss. But the moment you want to step slightly outside its boundaries, you hit a wall.&lt;/p&gt;
&lt;p&gt;For example, I wanted to add a &lt;a href=&quot;/posts/over-engineering-a-clap-button-again&quot;&gt;clap button&lt;/a&gt; to my posts. In Zola, this meant manually wiring up client-side JavaScript, wrestling with template variables to pass data to scripts, and ensuring it didn&amp;#39;t break on different page layouts. It felt like I was fighting the tool.&lt;/p&gt;
&lt;h2&gt;Enter Astro: The &amp;quot;Component&amp;quot; Model&lt;/h2&gt;
&lt;p&gt;I resisted Astro for a long time. &amp;quot;It&amp;#39;s JavaScript,&amp;quot; I told myself. &amp;quot;I don&amp;#39;t want a complex hydration process for a static blog.&amp;quot;&lt;/p&gt;
&lt;p&gt;But Astro&amp;#39;s &lt;a href=&quot;https://docs.astro.build/en/concepts/islands/&quot;&gt;Islands Architecture&lt;/a&gt; changed my mind. It sends zero JavaScript to the client by default. It&amp;#39;s just HTML generation, but with a DX that makes traditional template engines feel painful.&lt;/p&gt;
&lt;p&gt;Coming from a Django background, I used to love template inheritance (&lt;code&gt;{% block content %}&lt;/code&gt;). But after using Astro&amp;#39;s component-based model, I realized how much cleaner it is to compose a UI from small, reusable parts.&lt;/p&gt;
&lt;p&gt;Instead of a monolithic &lt;code&gt;base.html&lt;/code&gt; with fifty &lt;code&gt;{% if %}&lt;/code&gt; blocks, I have a &lt;code&gt;&amp;lt;Layout&amp;gt;&lt;/code&gt; component wrapping a &lt;code&gt;&amp;lt;Header&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;Main&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;Footer&amp;gt;&lt;/code&gt;. If I want a specific interactive widget on just one page? I import it and use it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;---
import ClapButton from &amp;#39;../components/ClapButton.astro&amp;#39;;
---

&amp;lt;Layout&amp;gt;
  &amp;lt;article&amp;gt;
    {/* Content */}
  &amp;lt;/article&amp;gt;
  &amp;lt;ClapButton client:visible /&amp;gt;
&amp;lt;/Layout&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That &lt;code&gt;client:visible&lt;/code&gt; directive is magic. It tells Astro: &amp;quot;Render this as static HTML, but hydrate it with JavaScript only when the user scrolls it into view.&amp;quot;&lt;/p&gt;
&lt;h2&gt;The Ecosystem Difference&lt;/h2&gt;
&lt;p&gt;Zola lives in a small corner of the Rust ecosystem. Astro sits on top of npm.&lt;/p&gt;
&lt;p&gt;Adding Tailwind CSS to Zola meant configuring a separate build step or relying on Zola&amp;#39;s Sass processing. In Astro? &lt;code&gt;npx astro add tailwind&lt;/code&gt;. Done.&lt;/p&gt;
&lt;p&gt;Generating dynamic Open Graph images for every post? In Zola, I&amp;#39;d have to write an external script to run at build time. In Astro, I wrote a &lt;a href=&quot;/posts/generating-dynamic-og-images-in-astro&quot;&gt;helper function&lt;/a&gt; using &lt;code&gt;satori&lt;/code&gt; and it just works.&lt;/p&gt;
&lt;p&gt;Yes, dealing with dependencies is annoying. But having access to the entire npm ecosystem means I stop reinventing things.&lt;/p&gt;
&lt;h2&gt;Is It Slower?&lt;/h2&gt;
&lt;p&gt;Technically, yes. Zola builds in milliseconds. Astro takes a few seconds because it has to spin up Node.js and run a full bundling pipeline.&lt;/p&gt;
&lt;p&gt;Does it matter? For a blog with a few hundred posts, the difference is negligible in CI/CD. The time I save &lt;em&gt;developing&lt;/em&gt; features in Astro far outweighs the few extra seconds I wait for a deploy.&lt;/p&gt;
&lt;h2&gt;So Why Bother?&lt;/h2&gt;
&lt;p&gt;Zola is still great if you want a simple, fast, zero-dependency static site. I&amp;#39;d still pick it for documentation sites or quick landing pages.&lt;/p&gt;
&lt;p&gt;But for a personal blog where I keep wanting to bolt on weird interactive things, Astro gets out of my way. I get static HTML by default and JavaScript only where I ask for it.&lt;/p&gt;
&lt;p&gt;Is this the final migration? Of course not. But for now, I&amp;#39;m happy here.&lt;/p&gt;
</content:encoded></item><item><title>The Art of Yak Shaving</title><link>https://rezhajul.io/posts/yak-shaving-art-of-getting-lost/</link><guid isPermaLink="true">https://rezhajul.io/posts/yak-shaving-art-of-getting-lost/</guid><description>I tried to change a font. Three hours later I was reading Linux kernel docs.</description><pubDate>Wed, 11 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In engineering, there&amp;#39;s a term for when a simple task spirals into ten unrelated ones: &lt;em&gt;yak shaving&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s what happened last Tuesday. I opened my laptop to change the font on my blog. I told myself it&amp;#39;d be 5 minutes.&lt;/p&gt;
&lt;p&gt;But the typography package was deprecated. So I updated it. The build broke because my Node.js version was too old. So I upgraded Node. That clashed with the server&amp;#39;s OS config. &lt;em&gt;&amp;quot;Might as well fix that too.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Three hours later I was reading the &lt;code&gt;cgroups&lt;/code&gt; docs in the Linux kernel manual at 2 AM. The font was still unchanged.&lt;/p&gt;
&lt;p&gt;Sometimes your entire day of &amp;quot;productivity&amp;quot; is just fixing problems you created for yourself. But you end up learning things you never would have looked up on purpose, so it&amp;#39;s not a total loss.&lt;/p&gt;
&lt;p&gt;If you were busy all day and feel like you got nothing done, welcome to the club. You were just shaving a yak.&lt;/p&gt;
</content:encoded></item><item><title>Own Nothing and Be Happy? I&apos;ll Pass.</title><link>https://rezhajul.io/posts/digital-hoarder-why-i-self-host/</link><guid isPermaLink="true">https://rezhajul.io/posts/digital-hoarder-why-i-self-host/</guid><description>Why I run a homelab and hoard my own data instead of trusting the cloud.</description><pubDate>Tue, 10 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It&amp;#39;s wild how easy it is to live without owning anything digitally. Music, movies, photos, work files, everything lives on someone else&amp;#39;s server, and we just pay for access.&lt;/p&gt;
&lt;p&gt;Convenient. Until your account gets locked, or the service decides to double the price overnight. I&amp;#39;ve had both happen.&lt;/p&gt;
&lt;p&gt;So yeah, I&amp;#39;m basically a digital hoarder. I run a mini PC at home as a server. My music is in FLAC on a NAS. My photos sit on a drive I can hold in my hand.&lt;/p&gt;
&lt;p&gt;Complicated? Definitely. The electricity bill is not pretty either. But my data is actually mine. Not at some company&amp;#39;s mercy, not subject to terms of service I never read.&lt;/p&gt;
&lt;p&gt;Self-hosting is a lot of work for what most people get for free. But every time a service shuts down or changes its rules, I feel a little smug about it.&lt;/p&gt;
</content:encoded></item><item><title>Over-Engineering a Clap Button, Again</title><link>https://rezhajul.io/posts/over-engineering-a-clap-button-again/</link><guid isPermaLink="true">https://rezhajul.io/posts/over-engineering-a-clap-button-again/</guid><description>The clap button worked, but KV had a race condition. I migrated the backend to Cloudflare D1 for atomic writes and hashed IPs.</description><pubDate>Mon, 09 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In the &lt;a href=&quot;/posts/adding-interactive-claps-to-static-blog&quot;&gt;previous post&lt;/a&gt;, I built a clap button for this blog using Cloudflare Workers and KV. It worked. Mostly.&lt;/p&gt;
&lt;p&gt;The problem was concurrency. KV is an eventually consistent key-value store. When two people clap at the same time, both reads return the same value, both increment by one, and one clap disappears. Read-modify-write on KV is not atomic.&lt;/p&gt;
&lt;p&gt;For a blog that gets three readers on a good day, this probably didn&amp;#39;t matter. But it bugged me. I also wanted to stop storing raw IP addresses for rate limiting. So I migrated the backend to &lt;a href=&quot;https://developers.cloudflare.com/d1/&quot;&gt;Cloudflare D1&lt;/a&gt;, their SQLite-at-the-edge database.&lt;/p&gt;
&lt;p&gt;The code is on &lt;a href=&quot;https://github.com/rezhajulio/clap-backend&quot;&gt;GitHub&lt;/a&gt;. The frontend didn&amp;#39;t change at all.&lt;/p&gt;
&lt;h2&gt;The schema&lt;/h2&gt;
&lt;p&gt;D1 is just SQLite. I created two tables: one for clap counts, one for rate limiting.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- migrations/0001_init.sql
CREATE TABLE IF NOT EXISTS claps (
  slug TEXT PRIMARY KEY,
  count INTEGER NOT NULL DEFAULT 0,
  updated_at INTEGER NOT NULL
);

CREATE TABLE IF NOT EXISTS rate_limits (
  ip_hash TEXT NOT NULL,
  slug TEXT NOT NULL,
  window_start INTEGER NOT NULL,
  count INTEGER NOT NULL DEFAULT 0,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (ip_hash, slug, window_start)
);

CREATE INDEX IF NOT EXISTS idx_rate_limits_window_start
  ON rate_limits(window_start);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Two things to note. First, &lt;code&gt;ip_hash&lt;/code&gt; stores a SHA-256 hash of the IP address, salted with an environment variable. No raw IPs touch the database. Second, the composite primary key &lt;code&gt;(ip_hash, slug, window_start)&lt;/code&gt; means rate limits are scoped per user, per post, per hour window.&lt;/p&gt;
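&lt;p&gt;The &lt;code&gt;window_start&lt;/code&gt; value is just the request timestamp truncated to the hour. A sketch of the arithmetic (the exact window length in the repo may differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// Every request in the same hour maps to the same window_start, so the
// composite primary key buckets claps per user, per post, per hour.
const WINDOW_MS = 60 * 60 * 1000;

function windowStartFor(nowMs: number): number {
  return Math.floor(nowMs / WINDOW_MS) * WINDOW_MS;
}
&lt;/code&gt;&lt;/pre&gt;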
&lt;h2&gt;Atomic writes&lt;/h2&gt;
&lt;p&gt;This was the whole point of the migration. With KV, incrementing a counter required three steps: read the current value, add to it, write it back. If two requests overlap, one write clobbers the other.&lt;/p&gt;
&lt;p&gt;With D1, I use &lt;code&gt;ON CONFLICT ... DO UPDATE&lt;/code&gt; to make the whole thing a single statement. No read step, no race.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Rate limit + increment in a batch
const rateLimitStmt = c.env.DB.prepare(`
  INSERT INTO rate_limits (ip_hash, slug, window_start, count, updated_at)
  VALUES (?1, ?2, ?3, ?4, ?5)
  ON CONFLICT(ip_hash, slug, window_start) DO UPDATE SET
    count = rate_limits.count + excluded.count,
    updated_at = excluded.updated_at
  WHERE rate_limits.count + excluded.count &amp;lt;= ?6
`).bind(ipHash, slug, windowStart, incrementBy, now, MAX_CLAPS_PER_IP);

const clapsStmt = c.env.DB.prepare(`
  INSERT INTO claps (slug, count, updated_at)
  VALUES (?1, ?2, ?3)
  ON CONFLICT(slug) DO UPDATE SET
    count = claps.count + excluded.count,
    updated_at = excluded.updated_at
  RETURNING count
`).bind(slug, incrementBy, now);

const [rateLimitResult, clapsResult] = await c.env.DB.batch([
  rateLimitStmt,
  clapsStmt,
]);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;WHERE&lt;/code&gt; clause on the rate limit insert is doing double duty. If the user has already exceeded the limit, the insert silently does nothing (zero rows changed). I check &lt;code&gt;rateLimitResult.meta.changes&lt;/code&gt; after the batch and return a 429 if it&amp;#39;s zero. The clap count only goes up if the rate limit allows it, and it all happens in one round trip.&lt;/p&gt;
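&lt;p&gt;A minimal sketch of that check, assuming a Hono-style handler (the code above already uses the &lt;code&gt;c&lt;/code&gt; context; the exact response shape here is mine, not necessarily the repo&amp;#39;s):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Zero changed rows means the WHERE clause blocked the upsert: rate limited.
if (rateLimitResult.meta.changes === 0) {
  return c.json({ error: &amp;#39;Rate limit exceeded&amp;#39; }, 429);
}

// Otherwise return the new total surfaced by the RETURNING clause.
return c.json({ claps: clapsResult.results[0].count });
&lt;/code&gt;&lt;/pre&gt;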
&lt;h2&gt;Hashing IPs&lt;/h2&gt;
&lt;p&gt;KV stored raw IPs as part of the rate limit key. That felt wrong. D1 gave me a reason to fix it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;async function sha256Hex(env, input) {
  const salt = env.IP_HASH_SALT || &amp;#39;&amp;#39;;
  const data = new TextEncoder().encode(salt + &amp;#39;:&amp;#39; + input);
  const hash = await crypto.subtle.digest(&amp;#39;SHA-256&amp;#39;, data);
  return [...new Uint8Array(hash)]
    .map(b =&amp;gt; b.toString(16).padStart(2, &amp;#39;0&amp;#39;))
    .join(&amp;#39;&amp;#39;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The salt is an environment variable in &lt;code&gt;wrangler.toml&lt;/code&gt;. Even if someone gets access to the database, they can&amp;#39;t reverse the IPs without the salt. It&amp;#39;s not bulletproof, but it&amp;#39;s a lot better than storing &lt;code&gt;192.168.1.42&lt;/code&gt; in plaintext.&lt;/p&gt;
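&lt;p&gt;One caveat: a value in &lt;code&gt;wrangler.toml&lt;/code&gt; also lives in your repo. Wrangler&amp;#39;s secret store is an alternative that keeps the salt out of version control while still exposing it to the Worker as &lt;code&gt;env.IP_HASH_SALT&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Prompts for the value and stores it encrypted alongside the Worker,
# readable in code as env.IP_HASH_SALT but absent from wrangler.toml.
wrangler secret put IP_HASH_SALT
&lt;/code&gt;&lt;/pre&gt;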
&lt;h2&gt;Cleaning up expired rate limits&lt;/h2&gt;
&lt;p&gt;KV had a nice feature for this: &lt;code&gt;expirationTtl&lt;/code&gt;. Set a TTL when you write a key, and KV deletes it automatically. D1 doesn&amp;#39;t have TTL. Rows stick around until you delete them.&lt;/p&gt;
&lt;p&gt;I added a scheduled handler that runs once a day at midnight UTC. It deletes rate limit rows older than two hours in batches to avoid timeouts and keep D1 billing reasonable.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;export default {
  fetch: app.fetch,
  async scheduled(event, env) {
    const cutoff = Date.now() - (2 * 60 * 60 * 1000);
    const BATCH_SIZE = 1000;

    while (true) {
      const result = await env.DB.prepare(`
        DELETE FROM rate_limits
        WHERE rowid IN (
          SELECT rowid FROM rate_limits
          WHERE window_start &amp;lt; ?1
          ORDER BY window_start
          LIMIT ?2
        )
      `).bind(cutoff, BATCH_SIZE).run();

      if (result.meta.changes &amp;lt; BATCH_SIZE) break;
    }
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;ORDER BY window_start LIMIT ?2&lt;/code&gt; pattern is intentional. D1 charges per row read, and the subquery walks the index on &lt;code&gt;window_start&lt;/code&gt;, so each statement touches at most &lt;code&gt;BATCH_SIZE&lt;/code&gt; expired rows. A single unbounded &lt;code&gt;DELETE WHERE window_start &amp;lt; cutoff&lt;/code&gt; could churn through every expired row in one statement and risk a timeout.&lt;/p&gt;
&lt;h2&gt;Migrating the data&lt;/h2&gt;
&lt;p&gt;I had about 10 clap records in KV. Not exactly a big data migration, but I still wanted to do it properly.&lt;/p&gt;
&lt;p&gt;I created a temporary admin endpoint that iterated through all &lt;code&gt;claps:*&lt;/code&gt; keys in KV and upserted them into D1. Ran it once, verified the counts matched with &lt;code&gt;wrangler d1 execute&lt;/code&gt;, then deleted the endpoint and removed the KV binding.&lt;/p&gt;
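&lt;p&gt;The endpoint is gone now, but the core loop was roughly this (binding names are illustrative; paging &lt;code&gt;KV.list&lt;/code&gt; with a &lt;code&gt;prefix&lt;/code&gt; and &lt;code&gt;cursor&lt;/code&gt; is the standard way to walk all keys):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// One-shot migration: copy every claps:* key from KV into D1.
let cursor;
do {
  const page = await env.KV.list({ prefix: &amp;#39;claps:&amp;#39;, cursor });
  for (const key of page.keys) {
    const slug = key.name.slice(&amp;#39;claps:&amp;#39;.length);
    const count = parseInt((await env.KV.get(key.name)) ?? &amp;#39;0&amp;#39;, 10);
    await env.DB.prepare(`
      INSERT INTO claps (slug, count, updated_at) VALUES (?1, ?2, ?3)
      ON CONFLICT(slug) DO UPDATE SET count = excluded.count
    `).bind(slug, count, Date.now()).run();
  }
  cursor = page.list_complete ? undefined : page.cursor;
} while (cursor);
&lt;/code&gt;&lt;/pre&gt;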
&lt;h2&gt;The wrangler config&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;wrangler.toml&lt;/code&gt; changes are minimal. Swap the KV binding for a D1 binding and add a cron trigger.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;name = &amp;quot;clap-backend&amp;quot;
main = &amp;quot;worker.js&amp;quot;

[vars]
ALLOWED_ORIGINS = &amp;quot;https://rezhajul.io,http://localhost:2222&amp;quot;
MAX_CLAPS_PER_REQUEST = &amp;quot;10&amp;quot;
MAX_CLAPS_PER_IP = &amp;quot;50&amp;quot;

[[d1_databases]]
binding = &amp;quot;DB&amp;quot;
database_name = &amp;quot;clap-backend&amp;quot;
database_id = &amp;quot;YOUR_D1_DATABASE_ID_HERE&amp;quot;
migrations_dir = &amp;quot;migrations&amp;quot;

[triggers]
crons = [&amp;quot;0 0 * * *&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Was it worth it?&lt;/h2&gt;
&lt;p&gt;For a blog with my traffic, honestly, probably not. KV would have been fine for years. But the migration took about two hours, and now I don&amp;#39;t have to think about lost writes or storing raw IPs. D1&amp;#39;s free tier is generous enough that I&amp;#39;m not paying anything extra.&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re building something similar and expect any real concurrency, skip KV and start with D1. The atomic writes alone are worth it.&lt;/p&gt;
</content:encoded></item><item><title>Auto-Publishing Scheduled Posts with GitHub Actions</title><link>https://rezhajul.io/posts/auto-publishing-scheduled-posts-with-github-actions/</link><guid isPermaLink="true">https://rezhajul.io/posts/auto-publishing-scheduled-posts-with-github-actions/</guid><description>My Astro blog already supports scheduled posts, but I still had to push to master to actually publish them. Here&apos;s how I fixed that with a cron job and a tiny Node script.</description><pubDate>Sun, 08 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;My blog has had &amp;quot;scheduled posts&amp;quot; for a while now. I set a &lt;code&gt;pubDatetime&lt;/code&gt; in the future, and Astro&amp;#39;s build process filters it out until that date passes. Simple enough.&lt;/p&gt;
&lt;p&gt;The problem: nothing actually triggers a rebuild when that date arrives. The post just sits there, technically publishable, until I push something to master. Which means I&amp;#39;m either pushing a dummy commit at noon or just forgetting about it entirely.&lt;/p&gt;
&lt;p&gt;I wanted a cron job that runs daily, checks if there&amp;#39;s anything new to publish, and deploys only when needed. No unnecessary builds, no wasted CI minutes.&lt;/p&gt;
&lt;h2&gt;How scheduled posts work here&lt;/h2&gt;
&lt;p&gt;Every post has a &lt;code&gt;pubDatetime&lt;/code&gt; in its frontmatter:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;pubDatetime: 2026-02-08T09:00:00+07:00
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At build time, a filter function checks whether that datetime has passed (with a 15-minute grace period):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const postFilter = ({ data }: AnyPost) =&amp;gt; {
  const isPublishTimePassed =
    Date.now() &amp;gt;
    new Date(data.pubDatetime).getTime() - SITE.scheduledPostMargin;
  return !data.draft &amp;amp;&amp;amp; (import.meta.env.DEV || isPublishTimePassed);
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So the post exists in the repo but won&amp;#39;t appear on the site until a build happens after its &lt;code&gt;pubDatetime&lt;/code&gt;. The missing piece is triggering that build automatically.&lt;/p&gt;
&lt;h2&gt;The check script&lt;/h2&gt;
&lt;p&gt;I wrote a small script (&lt;code&gt;scripts/check-scheduled-posts.ts&lt;/code&gt;) that scans all markdown files in &lt;code&gt;src/data/blog&lt;/code&gt;, &lt;code&gt;src/data/notes&lt;/code&gt;, and &lt;code&gt;src/data/links&lt;/code&gt;. For each file, it parses the frontmatter, extracts &lt;code&gt;pubDatetime&lt;/code&gt; and &lt;code&gt;draft&lt;/code&gt;, and decides whether the post is &amp;quot;due&amp;quot;, meaning it&amp;#39;s publishable now and its publish time falls after the last successful deploy.&lt;/p&gt;
&lt;p&gt;The &amp;quot;last deploy&amp;quot; part is the interesting bit. On the first run there&amp;#39;s no cached timestamp, so everything publishable counts as due. After a successful deploy, the workflow saves the current UTC time to a file and caches it with &lt;code&gt;actions/cache/save&lt;/code&gt;. On subsequent runs, it restores that cache and only flags posts whose effective publish time falls after the last deploy.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const effectivePub = pub.getTime() - MARGIN_MS;
const publishableNow = now.getTime() &amp;gt;= effectivePub;

if (!publishableNow) continue;

if (lastDeploy) {
  const isNewSinceLastDeploy = pub.getTime() &amp;gt; lastDeploy.getTime();
  if (isNewSinceLastDeploy) {
    due.push(file);
  }
} else {
  due.push(file);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This approach handles missed runs gracefully. If GitHub Actions skips a day (outages happen), the next run still picks up yesterday&amp;#39;s post because the window extends back to the last successful deploy, not just &amp;quot;today.&amp;quot;&lt;/p&gt;
&lt;p&gt;The script outputs &lt;code&gt;should_deploy=true&lt;/code&gt; or &lt;code&gt;false&lt;/code&gt; as a GitHub Actions step output, along with the count and list of due files.&lt;/p&gt;
&lt;h2&gt;The workflow&lt;/h2&gt;
&lt;p&gt;The deploy workflow now has a &lt;code&gt;schedule&lt;/code&gt; trigger at &lt;code&gt;0 5 * * *&lt;/code&gt; (5 AM UTC, which is noon Jakarta time). It runs two jobs:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;check_scheduled_posts&lt;/strong&gt; restores the cached timestamp, runs the check script, and outputs whether a deploy is needed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;deploy&lt;/strong&gt; is the actual build and deploy, gated by a conditional:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;if: &amp;gt;-
  ${{ github.event_name != &amp;#39;schedule&amp;#39; ||
      needs.check_scheduled_posts.outputs.should_deploy == &amp;#39;true&amp;#39; }}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Push, manual dispatch, and workflow_run triggers always deploy. The scheduled trigger only deploys when the check says there&amp;#39;s something new.&lt;/p&gt;
&lt;p&gt;After a successful deploy, it saves the timestamp:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- name: Save deploy timestamp
  run: date -u +&amp;quot;%Y-%m-%dT%H:%M:%SZ&amp;quot; &amp;gt; .last-deploy-timestamp

- name: Cache deploy timestamp
  uses: actions/cache/save@v4
  with:
    path: .last-deploy-timestamp
    key: last-deploy-ts-${{ github.run_id }}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using &lt;code&gt;github.run_id&lt;/code&gt; as part of the cache key ensures each deploy creates a new cache entry. The restore step uses &lt;code&gt;restore-keys: last-deploy-ts-&lt;/code&gt; to grab the most recent one.&lt;/p&gt;
&lt;h2&gt;Testing it locally&lt;/h2&gt;
&lt;p&gt;Running the script without a timestamp file simulates a first deploy. It flags every publishable post:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ bun run scripts/check-scheduled-posts.ts
Last deploy: (none, first run)
Posts due since last deploy: 368
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Writing a recent timestamp and running again shows it correctly narrows the window:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ echo &amp;quot;2026-02-06T04:00:00Z&amp;quot; &amp;gt; .last-deploy-timestamp
$ bun run scripts/check-scheduled-posts.ts
Last deploy: 2026-02-06T04:00:00.000Z
Posts due since last deploy: 1
  - src/data/blog/the-lazy-sysadmins-guide-to-docker.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And with a timestamp after all current posts:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ echo &amp;quot;2026-02-07T04:00:00Z&amp;quot; &amp;gt; .last-deploy-timestamp
$ bun run scripts/check-scheduled-posts.ts
Last deploy: 2026-02-07T04:00:00.000Z
Posts due since last deploy: 0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No posts due, no deploy. Exactly what I wanted.&lt;/p&gt;
&lt;h2&gt;Why not just run the cron unconditionally?&lt;/h2&gt;
&lt;p&gt;I could skip the check entirely and just build every day at noon. The site is small, builds take under a minute, and Astro is fast. But I like the idea of my CI being quiet when there&amp;#39;s nothing to do. GitHub Actions has usage limits even for public repos, and I&amp;#39;d rather save those minutes for actual work.&lt;/p&gt;
&lt;p&gt;Plus the check script doubles as a useful debugging tool. I can run it locally to see what&amp;#39;s scheduled and when.&lt;/p&gt;
&lt;h2&gt;The full setup&lt;/h2&gt;
&lt;p&gt;Two files make this work. Since my repo is private, here they are in full.&lt;/p&gt;
&lt;h3&gt;The check script&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;scripts/check-scheduled-posts.ts&lt;/code&gt; has no dependencies; Bun runs the TypeScript directly. It parses frontmatter with a simple regex, which works fine because my frontmatter is consistent. If yours isn&amp;#39;t, you might want to pull in &lt;code&gt;gray-matter&lt;/code&gt; or similar.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;import { readdir, readFile, appendFile } from &amp;quot;node:fs/promises&amp;quot;;
import { join } from &amp;quot;node:path&amp;quot;;

const TZ = &amp;quot;Asia/Jakarta&amp;quot;;
const MARGIN_MS = 15 * 60 * 1000;
const TIMESTAMP_FILE = &amp;quot;.last-deploy-timestamp&amp;quot;;

const ROOT_DIRS = [&amp;quot;src/data/blog&amp;quot;, &amp;quot;src/data/notes&amp;quot;, &amp;quot;src/data/links&amp;quot;];

interface Frontmatter {
  pubDatetime: string | null;
  draft: boolean;
}

function formatYMDInTZ(date: Date, timeZone: string): string {
  return new Intl.DateTimeFormat(&amp;quot;en-CA&amp;quot;, {
    timeZone,
    year: &amp;quot;numeric&amp;quot;,
    month: &amp;quot;2-digit&amp;quot;,
    day: &amp;quot;2-digit&amp;quot;,
  }).format(date);
}

function parseFrontmatter(raw: string): Frontmatter | null {
  if (!raw.startsWith(&amp;quot;---&amp;quot;)) return null;

  const lines = raw.split(/\r?\n/);
  if (lines[0].trim() !== &amp;quot;---&amp;quot;) return null;

  let end = -1;
  for (let i = 1; i &amp;lt; lines.length; i++) {
    if (lines[i].trim() === &amp;quot;---&amp;quot;) {
      end = i;
      break;
    }
  }
  if (end === -1) return null;

  const fmLines = lines.slice(1, end);

  const getScalar = (key: string): string | null =&amp;gt; {
    const re = new RegExp(`^\\s*${key}\\s*:\\s*(.+?)\\s*$`);
    for (const line of fmLines) {
      const m = line.match(re);
      if (m) {
        let v = m[1].trim();
        if (
          (v.startsWith(&amp;#39;&amp;quot;&amp;#39;) &amp;amp;&amp;amp; v.endsWith(&amp;#39;&amp;quot;&amp;#39;)) ||
          (v.startsWith(&amp;quot;&amp;#39;&amp;quot;) &amp;amp;&amp;amp; v.endsWith(&amp;quot;&amp;#39;&amp;quot;))
        ) {
          v = v.slice(1, -1);
        }
        return v;
      }
    }
    return null;
  };

  const pubDatetime = getScalar(&amp;quot;pubDatetime&amp;quot;);
  const draftRaw = getScalar(&amp;quot;draft&amp;quot;);
  const draft = draftRaw ? draftRaw.toLowerCase() === &amp;quot;true&amp;quot; : false;

  return { pubDatetime, draft };
}

async function listMarkdownFiles(dir: string): Promise&amp;lt;string[]&amp;gt; {
  let entries;
  try {
    entries = await readdir(dir, { withFileTypes: true });
  } catch {
    return [];
  }
  return entries
    .filter(e =&amp;gt; e.isFile() &amp;amp;&amp;amp; e.name.endsWith(&amp;quot;.md&amp;quot;))
    .map(e =&amp;gt; join(dir, e.name));
}

async function getLastDeployTime(): Promise&amp;lt;Date | null&amp;gt; {
  try {
    const raw = await readFile(TIMESTAMP_FILE, &amp;quot;utf8&amp;quot;);
    const ts = new Date(raw.trim());
    if (!Number.isNaN(ts.getTime())) return ts;
  } catch {
    // no cached timestamp
  }
  return null;
}

async function setOutput(key: string, value: string): Promise&amp;lt;void&amp;gt; {
  const outPath = process.env.GITHUB_OUTPUT;
  const line = `${key}=${value}\n`;
  if (outPath) {
    await appendFile(outPath, line);
    return;
  }
  process.stdout.write(line);
}

async function main() {
  const now = new Date();
  const lastDeploy = await getLastDeployTime();

  const allFiles = (
    await Promise.all(ROOT_DIRS.map(d =&amp;gt; listMarkdownFiles(d)))
  ).flat();

  const due: string[] = [];

  for (const file of allFiles) {
    const raw = await readFile(file, &amp;quot;utf8&amp;quot;);
    const fm = parseFrontmatter(raw);
    if (!fm || !fm.pubDatetime) continue;
    if (fm.draft) continue;

    const pub = new Date(fm.pubDatetime);
    if (Number.isNaN(pub.getTime())) continue;

    const effectivePub = pub.getTime() - MARGIN_MS;
    const publishableNow = now.getTime() &amp;gt;= effectivePub;

    if (!publishableNow) continue;

    if (lastDeploy) {
      const isNewSinceLastDeploy = pub.getTime() &amp;gt; lastDeploy.getTime();
      if (isNewSinceLastDeploy) {
        due.push(file);
      }
    } else {
      due.push(file);
    }
  }

  const shouldDeploy = due.length &amp;gt; 0;

  await setOutput(&amp;quot;should_deploy&amp;quot;, shouldDeploy ? &amp;quot;true&amp;quot; : &amp;quot;false&amp;quot;);
  await setOutput(&amp;quot;due_count&amp;quot;, String(due.length));
  await setOutput(&amp;quot;due_files&amp;quot;, due.join(&amp;quot;,&amp;quot;));

  console.log(
    [
      `Now (UTC): ${now.toISOString()}`,
      `Today (Asia/Jakarta): ${formatYMDInTZ(now, TZ)}`,
      `Last deploy: ${lastDeploy ? lastDeploy.toISOString() : &amp;quot;(none, first run)&amp;quot;}`,
      `Posts due since last deploy: ${due.length}`,
      ...(due.length ? due.map(f =&amp;gt; `  - ${f}`) : []),
    ].join(&amp;quot;\n&amp;quot;)
  );
}

main().catch(err =&amp;gt; {
  console.error(err);
  process.exit(1);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The workflow&lt;/h3&gt;
&lt;p&gt;The relevant parts of &lt;code&gt;.github/workflows/deploy.yml&lt;/code&gt;. Your deploy step will look different depending on your hosting setup -- I use SSH/rsync to a VPS, but Cloudflare Pages, Vercel, or Netlify would work the same way with their respective deploy actions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: Deploy Astro Site

on:
  workflow_dispatch:
  push:
    branches:
      - master
  schedule:
    # 12:00 Asia/Jakarta (UTC+7) == 05:00 UTC
    - cron: &amp;quot;0 5 * * *&amp;quot;

concurrency:
  group: deploy-astro-site
  cancel-in-progress: false

jobs:
  check_scheduled_posts:
    name: Check scheduled posts
    runs-on: ubuntu-latest
    outputs:
      should_deploy: ${{ steps.check.outputs.should_deploy }}
      due_count: ${{ steps.check.outputs.due_count }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Bun
        uses: oven-sh/setup-bun@v1
        with:
          bun-version: latest

      - name: Restore last deploy timestamp
        uses: actions/cache/restore@v4
        with:
          path: .last-deploy-timestamp
          key: last-deploy-ts-
          restore-keys: |
            last-deploy-ts-

      - id: check
        name: Check for posts due since last deploy
        run: bun run scripts/check-scheduled-posts.ts

  deploy:
    name: Build and deploy
    runs-on: ubuntu-latest
    needs: [check_scheduled_posts]
    if: &amp;gt;-
      ${{ github.event_name != &amp;#39;schedule&amp;#39; ||
          needs.check_scheduled_posts.outputs.should_deploy == &amp;#39;true&amp;#39; }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Bun
        uses: oven-sh/setup-bun@v1
        with:
          bun-version: latest

      - name: Install Dependencies
        run: bun install

      - name: Build Astro Site
        run: bun run build

      # Replace this with your own deploy step
      - name: Deploy to Server
        run: echo &amp;quot;Deploy your site here&amp;quot;

      - name: Save deploy timestamp
        run: date -u +&amp;quot;%Y-%m-%dT%H:%M:%SZ&amp;quot; &amp;gt; .last-deploy-timestamp

      - name: Cache deploy timestamp
        uses: actions/cache/save@v4
        with:
          path: .last-deploy-timestamp
          key: last-deploy-ts-${{ github.run_id }}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now I can write a post, set the date, and forget about it. The blog takes care of the rest at noon.&lt;/p&gt;
</content:encoded></item><item><title>The Lazy Sysadmin&apos;s Guide to Docker Maintenance</title><link>https://rezhajul.io/posts/the-lazy-sysadmins-guide-to-docker/</link><guid isPermaLink="true">https://rezhajul.io/posts/the-lazy-sysadmins-guide-to-docker/</guid><description>How I sleep soundly while my server updates itself. A guide to Watchtower and Telegram notifications.</description><pubDate>Sat, 07 Feb 2026 01:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Manually running &lt;code&gt;docker compose pull&lt;/code&gt; and &lt;code&gt;docker compose up -d&lt;/code&gt; every time an update drops is exhausting. We have better things to do.&lt;/p&gt;
&lt;p&gt;But ignoring updates isn&amp;#39;t an option either. Security patches matter. New features are nice.&lt;/p&gt;
&lt;p&gt;So I built a lazy stack: Watchtower + Telegram notifications. My homelab updates itself at 4 AM and tells me what happened when I wake up.&lt;/p&gt;
&lt;h2&gt;The tool: Watchtower&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://containrrr.dev/watchtower/&quot;&gt;Watchtower&lt;/a&gt; automates updating the images your Docker containers run on. It checks for new images, pulls them, and gracefully restarts your containers with the exact same options you used to deploy them.&lt;/p&gt;
&lt;h2&gt;The configuration&lt;/h2&gt;
&lt;p&gt;I use &lt;code&gt;docker-compose&lt;/code&gt;. Clean, reproducible, easy to back up.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
    environment:
      # Use a recent API version to avoid &amp;quot;client version too old&amp;quot; errors on Arch/Modern Docker
      - DOCKER_API_VERSION=1.45
      
      # Clean up old images after update to save disk space
      - WATCHTOWER_CLEANUP=true
      
      # Schedule it! (Cron format: Seconds Minutes Hours Day Month Weekday)
      # This runs at 04:00 AM every day.
      - WATCHTOWER_SCHEDULE=0 0 4 * * *
      
      # Silence the startup banner in logs
      - WATCHTOWER_NO_STARTUP_MESSAGE=true
      
      # NOTIFICATIONS (The fun part)
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - WATCHTOWER_NOTIFICATION_URL=telegram://YOUR_BOT_TOKEN@telegram?channels=YOUR_CHAT_ID
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Breaking down the config&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduling (&lt;code&gt;WATCHTOWER_SCHEDULE&lt;/code&gt;)&lt;/strong&gt;:
I set it to &lt;code&gt;0 0 4 * * *&lt;/code&gt;. Why 4 AM? I&amp;#39;m asleep, and if something breaks, I won&amp;#39;t notice until morning anyway. Internet traffic is also low.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cleanup (&lt;code&gt;WATCHTOWER_CLEANUP&lt;/code&gt;)&lt;/strong&gt;:
Removes the old image after pulling the new one. No more &lt;code&gt;docker system prune&lt;/code&gt; panic when your disk hits 100%.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Version (&lt;code&gt;DOCKER_API_VERSION&lt;/code&gt;)&lt;/strong&gt;:
If you&amp;#39;re on a bleeding-edge distro like Arch, Watchtower might complain that its client is too old. Setting the version (e.g., &lt;code&gt;1.45&lt;/code&gt;) fixes this.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Setting up notifications&lt;/h2&gt;
&lt;p&gt;Updates are great, but silent updates are scary. I want to know what happened.&lt;/p&gt;
&lt;p&gt;Watchtower supports &lt;a href=&quot;https://containrrr.dev/shoutrrr/&quot;&gt;Shoutrrr&lt;/a&gt;, which connects to basically everything (Discord, Telegram, Slack, Email, Gotify, etc.).&lt;/p&gt;
&lt;p&gt;For Telegram:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a bot with &lt;a href=&quot;https://t.me/BotFather&quot;&gt;@BotFather&lt;/a&gt; to get a token.&lt;/li&gt;
&lt;li&gt;Get your Chat ID (use &lt;code&gt;@userinfobot&lt;/code&gt; or similar).&lt;/li&gt;
&lt;li&gt;Format the URL: &lt;code&gt;telegram://TOKEN@telegram?channels=CHAT_ID&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
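&lt;p&gt;Before handing the URL to Watchtower, you can sanity-check it with the standalone Shoutrrr image, the same library Watchtower uses for notifications. Something like this (check &lt;code&gt;shoutrrr send --help&lt;/code&gt; for the exact flags):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# If the URL is right, your bot posts the message to the chat.
docker run --rm containrrr/shoutrrr send \
  --url &amp;quot;telegram://YOUR_BOT_TOKEN@telegram?channels=YOUR_CHAT_ID&amp;quot; \
  --message &amp;quot;Test from shoutrrr&amp;quot;
&lt;/code&gt;&lt;/pre&gt;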
&lt;p&gt;Now every morning I wake up to a message like:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Found new image for container &lt;code&gt;my-app&lt;/code&gt;... Updated!&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This setup takes 5 minutes and saves hours of manual work over a year.&lt;/p&gt;
&lt;p&gt;For mission-critical databases, pin your versions. But for typical homelab services (Plex, *arr apps, simple web servers), it works fine.&lt;/p&gt;
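&lt;p&gt;And if you do pin something, tell Watchtower to leave it alone. The documented opt-out is a label on the container:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;services:
  postgres:
    image: postgres:16   # pinned on purpose
    labels:
      # Watchtower skips containers carrying this label
      - &amp;quot;com.centurylinklabs.watchtower.enable=false&amp;quot;
&lt;/code&gt;&lt;/pre&gt;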
</content:encoded></item><item><title>You Can&apos;t Stack Overflow a Deadlift</title><link>https://rezhajul.io/posts/you-cant-stack-overflow-a-deadlift/</link><guid isPermaLink="true">https://rezhajul.io/posts/you-cant-stack-overflow-a-deadlift/</guid><description>I recently started going to the gym. Being a beginner again after years of being a &apos;Senior&apos; Engineer was a humbling reality check.</description><pubDate>Fri, 06 Feb 2026 00:52:00 GMT</pubDate><content:encoded>&lt;p&gt;I&amp;#39;m comfortable in a terminal. I debug race conditions. I have strong opinions about system architecture, probably wrong ones. In tech, I&amp;#39;m a &amp;quot;Senior.&amp;quot;&lt;/p&gt;
&lt;p&gt;Then I walked into a gym last week. Wasn&amp;#39;t a Senior anymore.&lt;/p&gt;
&lt;p&gt;I stood there staring at a machine, trying to figure out which way to sit on it, hoping no one noticed. Shaking under an empty bar. Total noob.&lt;/p&gt;
&lt;p&gt;Best refresher course on software engineering I&amp;#39;ve had in years.&lt;/p&gt;
&lt;h2&gt;Looking stupid&lt;/h2&gt;
&lt;p&gt;The free weights section is terrifying. Felt exactly like my first day as a junior dev pushing code to production. Everyone else looks like they know what they&amp;#39;re doing. You feel watched.&lt;/p&gt;
&lt;p&gt;We tell juniors &amp;quot;just ask questions,&amp;quot; but we forget how paralyzing it feels to ask a &amp;quot;stupid&amp;quot; one.&lt;/p&gt;
&lt;p&gt;I had to swallow my pride and ask a trainer how to do a proper squat. If I hadn&amp;#39;t, I&amp;#39;d have hurt myself. Same thing happens in code. If juniors don&amp;#39;t ask, they hurt the codebase. Being a beginner again reminded me to go easier on the new folks. We were all shaking under the empty bar once.&lt;/p&gt;
&lt;h2&gt;You can&amp;#39;t copy-paste strength&lt;/h2&gt;
&lt;p&gt;I&amp;#39;m used to shortcuts. Don&amp;#39;t know how to center a div? Google it. Copy-paste. Move on.&lt;/p&gt;
&lt;p&gt;At the gym, I watched a YouTube video on &amp;quot;best chest workout&amp;quot; and thought I had it figured out. Then I got under the bar. Gravity doesn&amp;#39;t care about your theoretical knowledge.&lt;/p&gt;
&lt;p&gt;You can&amp;#39;t copy-paste strength. Can&amp;#39;t ChatGPT your way to a personal record.&lt;/p&gt;
&lt;p&gt;Made me realize how much of modern development is gluing things together versus actually understanding them. Lifting forced me to &lt;em&gt;feel&lt;/em&gt; the mechanics, not just memorize output. Made me want to go back and understand how my tools work, not just how to use them.&lt;/p&gt;
&lt;h2&gt;Consistency beats motivation&lt;/h2&gt;
&lt;p&gt;Coding has flow states. You blink and four hours are gone.&lt;/p&gt;
&lt;p&gt;The gym hasn&amp;#39;t been fun yet. My muscles ache. I&amp;#39;m tired. It sucks. I go anyway.&lt;/p&gt;
&lt;p&gt;The most valuable code isn&amp;#39;t written during a manic 3am burst. It&amp;#39;s written on a Tuesday afternoon when you&amp;#39;re bored but you write the unit tests anyway.&lt;/p&gt;
&lt;p&gt;The gym teaches the same thing. Forget the highlight reel. Results don&amp;#39;t come from one intense session. They come from showing up when you don&amp;#39;t want to.&lt;/p&gt;
&lt;h2&gt;Ego lifting&lt;/h2&gt;
&lt;p&gt;The guy next to me is curling my squat weight like it&amp;#39;s styrofoam. I want to grab the heavier dumbbells. That&amp;#39;s ego lifting. Do that and I tear a muscle.&lt;/p&gt;
&lt;p&gt;Coding has the same trap. Skip docs because &amp;quot;I&amp;#39;ll figure it out.&amp;quot; Skip tests because &amp;quot;it&amp;#39;s simple.&amp;quot;&lt;/p&gt;
&lt;p&gt;Being physically weak forced me to respect the process. Start light. Check form. Rest. Thinking you&amp;#39;re too smart for the basics is the fastest way to break production.&lt;/p&gt;
&lt;p&gt;Being a gym noob stripped away the ego I built up in my career. Growth happens when you&amp;#39;re shaking under a barbell or staring at a panic log.&lt;/p&gt;
&lt;p&gt;So I keep showing up. Keep lifting the light weights. Maybe the code gets stronger too.&lt;/p&gt;
</content:encoded></item><item><title>Why I Moved My AI Brain to a Homelab</title><link>https://rezhajul.io/posts/why-i-moved-my-ai-brain-to-homelab/</link><guid isPermaLink="true">https://rezhajul.io/posts/why-i-moved-my-ai-brain-to-homelab/</guid><description>From a monthly VPS bill to a powerful local server. Why I migrated my AI assistant (OpenClaw) to my homelab.</description><pubDate>Thu, 05 Feb 2026 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;For the last year, my AI assistant (Cici, running on OpenClaw) lived in the cloud. She was hosted on a decent VPS in Singapore. It worked fine. I paid my monthly bill, I SSH&amp;#39;d in when I needed to, and she was always there.&lt;/p&gt;
&lt;p&gt;But last weekend, I killed the server.&lt;/p&gt;
&lt;p&gt;I migrated everything to a physical box sitting in my home rack.&lt;/p&gt;
&lt;p&gt;Why on earth would I move from a reliable datacenter to a DIY box at home?&lt;/p&gt;
&lt;h2&gt;1. The cost vs. power ratio&lt;/h2&gt;
&lt;p&gt;Let&amp;#39;s talk specs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;My VPS ($10/month):&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2 vCPU&lt;/li&gt;
&lt;li&gt;4GB RAM&lt;/li&gt;
&lt;li&gt;80GB SSD&lt;/li&gt;
&lt;li&gt;Shared resources (noisy neighbors)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;My Homelab:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Intel Core i5 (Gen 7)&lt;/li&gt;
&lt;li&gt;16GB RAM&lt;/li&gt;
&lt;li&gt;256GB SSD (System)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;20TB HDD&lt;/strong&gt; (Data/Media)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the cloud, storage and RAM are expensive. Running OpenClaw + Docker containers on 4GB RAM was a constant juggling act. OOM (Out of Memory) kills were my daily breakfast.&lt;/p&gt;
&lt;p&gt;On the local machine? I have 16GB RAM and plenty of storage. I can keep multiple Docker containers running without OOM kills, and Cici now has access to &lt;strong&gt;20TB of storage&lt;/strong&gt;. She can manage my media library, process backups, and hoard data without me worrying about block storage fees.&lt;/p&gt;
&lt;h2&gt;2. Latency&lt;/h2&gt;
&lt;p&gt;My VPS ping was ~20ms. Not bad. But my server is now on my local LAN.&lt;/p&gt;
&lt;p&gt;The latency is &lt;strong&gt;sub-1ms&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;When I&amp;#39;m coding and asking Cici to &lt;code&gt;grep&lt;/code&gt; a massive codebase or perform file operations, that speed difference adds up. It feels snappy. It feels &lt;em&gt;local&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;3. Data sovereignty (and paranoia)&lt;/h2&gt;
&lt;p&gt;I know exactly where my data lives. Cici&amp;#39;s memory files (&lt;code&gt;MEMORY.md&lt;/code&gt;), my personal logs, my API keys. They aren&amp;#39;t on someone else&amp;#39;s computer anymore. They are on a drive I can physically pull out.&lt;/p&gt;
&lt;p&gt;If the internet goes down, my assistant doesn&amp;#39;t disappear (mostly). She can still control my local smart home, organize my files, and run local scripts.&lt;/p&gt;
&lt;h2&gt;4. The tinkering factor&lt;/h2&gt;
&lt;p&gt;I&amp;#39;ll be honest: part of this is just the joy of tinkering.&lt;/p&gt;
&lt;p&gt;On a VPS, you SSH in, do your thing, log out. It&amp;#39;s sterile. There&amp;#39;s no personality to it.&lt;/p&gt;
&lt;p&gt;With a homelab, I get to hear the fans spin up when Cici is processing something heavy. I can walk over and check the blinking lights. When something breaks, I physically reseat a cable or swap a drive. It&amp;#39;s tactile. It&amp;#39;s &lt;em&gt;mine&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Running your own server also means dealing with things a VPS abstracts away. Power outages, UPS battery health, fan curves. It&amp;#39;s a different kind of maintenance.&lt;/p&gt;
&lt;h2&gt;The migration process&lt;/h2&gt;
&lt;p&gt;It wasn&amp;#39;t smooth.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;OS:&lt;/strong&gt; I installed &lt;strong&gt;Arch Linux&lt;/strong&gt;. (Btw, I use Arch). It gives me minimal bloat and the latest kernels.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Networking:&lt;/strong&gt; I use &lt;strong&gt;Tailscale&lt;/strong&gt; to access Cici from anywhere. Whether I&amp;#39;m at a coffee shop or in bed, I have a secure, direct tunnel to my home server despite the CGNAT.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backup:&lt;/strong&gt; I used &lt;code&gt;rsync&lt;/code&gt; to pull Cici&amp;#39;s &amp;quot;brain&amp;quot; (her &lt;code&gt;clawd&lt;/code&gt; directory) from the VPS. She woke up on the new machine, read her &lt;code&gt;MEMORY.md&lt;/code&gt;, and realized she had moved.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;The downsides&lt;/h2&gt;
&lt;p&gt;There are trade-offs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Power bill.&lt;/strong&gt; The server draws about 10W idle (I measured with a Kill A Watt meter). That&amp;#39;s roughly 7 kWh/month, which adds maybe $1-2 to my electricity bill. Way cheaper than the VPS.&lt;/p&gt;
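&lt;p&gt;The arithmetic, if you want to check my numbers (the $0.15/kWh rate is an assumption; plug in your own tariff):&lt;/p&gt;

```shell
# 10 W drawn continuously over a 30-day month
awk 'BEGIN {
  kwh = 10 * 24 * 30 / 1000            # watts * hours * days -> kWh
  printf "%.1f kWh/month\n", kwh
  printf "~$%.2f/month at $0.15/kWh\n", kwh * 0.15
}'
```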
&lt;p&gt;&lt;strong&gt;Uptime is my problem now.&lt;/strong&gt; When the VPS went down, I filed a ticket and waited. When my homelab goes down, I&amp;#39;m the one crawling behind the rack at 2am. Last week the power flickered and I had to reconfigure my UPS settings.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ISP drama.&lt;/strong&gt; My ISP gives me CGNAT, which means no direct inbound connections. Tailscale solves this for me, but if you want to host public services, you&amp;#39;ll need a VPS as a tunnel endpoint anyway. Ironic.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Noise and heat.&lt;/strong&gt; The server sits on my desk. The fans are quiet most of the time, but when Cici is processing something heavy, I hear it. It&amp;#39;s not annoying, just present.&lt;/p&gt;
&lt;p&gt;Cloud is convenient. But running your own hardware changes things.&lt;/p&gt;
&lt;p&gt;The homelab gives my AI assistant more room to grow. She&amp;#39;s faster, has text embeddings stored locally, and 20TB of storage to play with.&lt;/p&gt;
&lt;p&gt;If you have an old PC collecting dust, try self-hosting your agent. Yesterday Cici reminded me she could now access my entire photo archive. That wouldn&amp;#39;t have happened on an 80GB VPS.&lt;/p&gt;
</content:encoded></item><item><title>Bringing Interactivity to a Static Blog: The Clap Button</title><link>https://rezhajul.io/posts/adding-interactive-claps-to-static-blog/</link><guid isPermaLink="true">https://rezhajul.io/posts/adding-interactive-claps-to-static-blog/</guid><description>Static sites are fast but lonely. I added a Medium-style Clap button using Astro, Preact, and Cloudflare Workers to fix that.</description><pubDate>Wed, 04 Feb 2026 02:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I love my Astro blog. It&amp;#39;s fast, cheap to host, and I don&amp;#39;t have to worry about security patches. But it&amp;#39;s also lonely. When I publish something, I have no idea if anyone actually read it, let alone liked it.&lt;/p&gt;
&lt;p&gt;I didn&amp;#39;t want to install a heavy comment system like Disqus or utteranc.es just yet. I wanted something low-friction. The &amp;quot;Clap&amp;quot; button on Medium is perfect for this—it&amp;#39;s anonymous, satisfying to click, and gives me just enough signal to know if a post landed.&lt;/p&gt;
&lt;p&gt;So I built one. The full code is on &lt;a href=&quot;https://github.com/rezhajulio/clap-backend&quot;&gt;GitHub&lt;/a&gt; if you want to deploy your own.&lt;/p&gt;
&lt;p&gt;The stack:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Backend: Cloudflare Workers (Hono) + KV Storage&lt;/li&gt;
&lt;li&gt;Frontend: Preact (Astro Island)&lt;/li&gt;
&lt;li&gt;State: Optimistic updates + Debouncing&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The backend (Cloudflare Workers)&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;The code below is simplified for clarity. See the &lt;a href=&quot;https://github.com/rezhajulio/clap-backend&quot;&gt;full implementation&lt;/a&gt; for slug validation, proper rate limiting, and configurable CORS.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I needed a place to store the counters. Cloudflare KV works well here—eventual consistency is fine for likes, and reads are fast.&lt;/p&gt;
&lt;p&gt;I wrote a small API using &lt;a href=&quot;https://hono.dev&quot;&gt;Hono&lt;/a&gt;. It has two jobs: get the count, and increment it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// worker.js
import { Hono } from &amp;#39;hono&amp;#39;;
import { cors } from &amp;#39;hono/cors&amp;#39;;

const app = new Hono();
const MAX_CLAPS_PER_IP = 50;

// Security: only allow our own domain (and subdomains) in production
app.use(&amp;#39;/*&amp;#39;, cors({
  origin: (origin) =&amp;gt; (origin === &amp;#39;https://rezhajul.io&amp;#39; || origin.endsWith(&amp;#39;.rezhajul.io&amp;#39;)) ? origin : null,
}));

// Read the current count (the &amp;quot;get&amp;quot; half of the API)
app.get(&amp;#39;/claps/:slug&amp;#39;, async (c) =&amp;gt; {
  const count = await c.env.BLOG_CLAPS.get(`claps:${c.req.param(&amp;#39;slug&amp;#39;)}`);
  return c.json({ count: parseInt(count || &amp;#39;0&amp;#39;, 10) });
});

app.post(&amp;#39;/claps/:slug&amp;#39;, async (c) =&amp;gt; {
  const { slug } = c.req.param();
  const ip = c.req.header(&amp;#39;CF-Connecting-IP&amp;#39;);
  
  // 1. Rate Limiting (Simplest implementation)
  const rlKey = `rl:${ip}:${slug}`;
  const rlData = await c.env.BLOG_CLAPS.get(rlKey, { type: &amp;#39;json&amp;#39; }) || { count: 0 };
  
  const body = await c.req.json().catch(() =&amp;gt; ({ count: 1 }));
  const increment = Math.min(body.count || 1, 10); // Batch limit

  if (rlData.count + increment &amp;gt; MAX_CLAPS_PER_IP) {
    return c.json({ error: &amp;#39;Too many claps&amp;#39; }, 429);
  }

  // 2. Update Rate Limit
  rlData.count += increment;
  await c.env.BLOG_CLAPS.put(rlKey, JSON.stringify(rlData), { expirationTtl: 3600 }); // 1 hour TTL

  // 3. Update Global Count
  const key = `claps:${slug}`;
  const current = await c.env.BLOG_CLAPS.get(key);
  const next = parseInt(current || &amp;#39;0&amp;#39;, 10) + increment;
  
  await c.env.BLOG_CLAPS.put(key, next.toString());
  
  return c.json({ count: next });
});

export default app;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I implemented a simple rate limiter using the user&amp;#39;s IP. You can clap up to 50 times per post per hour. This stops script kiddies from inflating the numbers while letting enthusiastic readers (like my mom) click away.&lt;/p&gt;
&lt;h2&gt;The frontend (Preact island)&lt;/h2&gt;
&lt;p&gt;For the button, I used &lt;strong&gt;Preact&lt;/strong&gt;. It&amp;#39;s tiny (3KB) and I don&amp;#39;t need the full weight of React for a single button.&lt;/p&gt;
&lt;p&gt;The tricky part is making it feel instant. I use optimistic UI updates: the number goes up immediately when you click, even before the server responds. I also debounce the network requests. If you click 10 times in one second, the browser only sends one request to the server saying &amp;quot;add 10&amp;quot;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-tsx&quot;&gt;// src/components/ClapButton.tsx
import { useState, useEffect, useRef } from &amp;#39;preact/hooks&amp;#39;;
import confetti from &amp;#39;canvas-confetti&amp;#39;;

export default function ClapButton({ slug, apiUrl }) {
  const [count, setCount] = useState(0);
  const [userClaps, setUserClaps] = useState(0); // Track local session
  const debounceTimer = useRef(null);
  const pendingClaps = useRef(0);

  // Load the existing total once so the counter starts at the real number
  useEffect(() =&amp;gt; {
    fetch(`${apiUrl}/claps/${slug}`)
      .then(res =&amp;gt; res.json())
      .then(data =&amp;gt; setCount(data.count || 0))
      .catch(() =&amp;gt; {}); // a failed load just leaves it at 0
  }, []);

  const handleClap = () =&amp;gt; {
    if (userClaps &amp;gt;= 50) return;

    // 1. Optimistic Update
    setCount(c =&amp;gt; c + 1);
    setUserClaps(c =&amp;gt; c + 1);
    pendingClaps.current += 1;

    // 2. Confetti! 🎉
    confetti({
      particleCount: 15,
      spread: 40,
      origin: { y: 0.7 },
      colors: [&amp;#39;#FFD700&amp;#39;, &amp;#39;#FFA500&amp;#39;]
    });

    // 3. Debounce API Call
    if (debounceTimer.current) clearTimeout(debounceTimer.current);
    
    debounceTimer.current = setTimeout(async () =&amp;gt; {
      const payload = { count: pendingClaps.current };
      pendingClaps.current = 0;
      
      await fetch(`${apiUrl}/claps/${slug}`, {
        method: &amp;#39;POST&amp;#39;,
        headers: { &amp;#39;Content-Type&amp;#39;: &amp;#39;application/json&amp;#39; },
        body: JSON.stringify(payload)
      });
    }, 1000);
  };

  return (
    &amp;lt;button onClick={handleClap} disabled={userClaps &amp;gt;= 50}&amp;gt;
      &amp;lt;span className=&amp;quot;text-2xl&amp;quot;&amp;gt;👏&amp;lt;/span&amp;gt;
      &amp;lt;span className=&amp;quot;font-bold&amp;quot;&amp;gt;{count}&amp;lt;/span&amp;gt;
    &amp;lt;/button&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Integrating into Astro&lt;/h2&gt;
&lt;p&gt;Astro makes this trivial. I just drop the component into my layout and tell Astro to hydrate it when it becomes visible.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;&amp;lt;!-- src/layouts/PostDetails.astro --&amp;gt;
&amp;lt;div class=&amp;quot;my-8 flex justify-between&amp;quot;&amp;gt;
  &amp;lt;ShareLinks /&amp;gt;
  &amp;lt;!-- apiUrl comes from an env var pointing at the Worker (name it whatever you like) --&amp;gt;
  &amp;lt;ClapButton slug={slug} apiUrl={import.meta.env.PUBLIC_CLAP_API_URL} client:visible /&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;client:visible&lt;/code&gt; directive matters. If a user never scrolls down to the bottom of the article, the JavaScript for this button never loads.&lt;/p&gt;
&lt;p&gt;It&amp;#39;s a small feature, but it closes the loop. Writing into the void is hard. Seeing a counter go up, even by one, is a nice reminder that there&amp;#39;s a human on the other side of the screen.&lt;/p&gt;
&lt;p&gt;Go ahead, try it out below.&lt;/p&gt;
</content:encoded></item><item><title>Why &quot;Dumb&quot; Agents Are Winning: The Case for Shell-First AI</title><link>https://rezhajul.io/posts/why-dumb-agents-are-winning-shell-first-ai/</link><guid isPermaLink="true">https://rezhajul.io/posts/why-dumb-agents-are-winning-shell-first-ai/</guid><description>Smart agents are great, but simple ones actually ship code. Why tools like Pi and OpenClaw beat complex LSP-based assistants.</description><pubDate>Tue, 03 Feb 2026 10:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Complexity is the enemy of reliability.&lt;/p&gt;
&lt;p&gt;There&amp;#39;s a split happening in the AI coding space. On one side, you have agents that try to replicate the full IDE experience. They spin up language servers, parse ASTs, manage context windows, and attempt to reason like a senior engineer.&lt;/p&gt;
&lt;p&gt;On the other side, you have tools like Pi (by badlogicgames) and OpenClaw (what I use). They&amp;#39;re comparatively stupid. No deep semantic understanding of your codebase. They mostly just run shell commands.&lt;/p&gt;
&lt;p&gt;Guess which ones I actually use.&lt;/p&gt;
&lt;h2&gt;The bloat problem&lt;/h2&gt;
&lt;p&gt;I keep seeing people praise Pi for being simple. And honestly? That tracks.&lt;/p&gt;
&lt;p&gt;When an agent tries to be too smart, things break. Parsing a large codebase to build the &amp;quot;perfect&amp;quot; context window takes forever. If the LSP crashes or the environment is slightly off, the whole thing hangs. And here&amp;#39;s the weird part: giving an LLM 100 files of context often confuses it more than giving it 3 relevant ones.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve tried the &amp;quot;smart&amp;quot; agents. They feel like driving a Tesla that decides to update its firmware while you&amp;#39;re on the highway. Cool tech. But I just need groceries.&lt;/p&gt;
&lt;h2&gt;When smart agents choke&lt;/h2&gt;
&lt;p&gt;I keep seeing the same complaints on Twitter and HN. Someone asks an agent a simple question about a medium-sized codebase. The agent spends 3 minutes &amp;quot;analyzing&amp;quot; before it can answer where a function is defined.&lt;/p&gt;
&lt;p&gt;Meanwhile &lt;code&gt;grep -rn &amp;quot;func DoThing&amp;quot; .&lt;/code&gt; would have answered in 200 milliseconds.&lt;/p&gt;
&lt;p&gt;Or the agent tries to refactor something and loads the entire project into context, including node_modules from a completely unrelated frontend folder. Then it hallucinates imports from packages that aren&amp;#39;t installed. The &amp;quot;smart&amp;quot; context selection makes things worse.&lt;/p&gt;
&lt;p&gt;Maybe people are using these tools wrong. But the pattern seems consistent: agent tries to be clever, agent gets confused, developer wastes 10 minutes watching a spinner.&lt;/p&gt;
&lt;h2&gt;Shell-first&lt;/h2&gt;
&lt;p&gt;My self-hosted agent, Cici (running OpenClaw), works on a simpler principle: if you can do it in the terminal, she can do it too.&lt;/p&gt;
&lt;p&gt;No magic &amp;quot;Refactor Codebase&amp;quot; button. She just runs:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;grep -r &amp;quot;pattern&amp;quot; .&lt;/code&gt; to find files&lt;/li&gt;
&lt;li&gt;&lt;code&gt;read file.ts&lt;/code&gt; to see content&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sed&lt;/code&gt; or &lt;code&gt;write&lt;/code&gt; to change it&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bun test&lt;/code&gt; to verify&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This is how I work anyway. It&amp;#39;s transparent. If &lt;code&gt;grep&lt;/code&gt; fails, I see the exit code. I don&amp;#39;t have to guess why some internal &amp;quot;thinking process&amp;quot; got stuck.&lt;/p&gt;
&lt;p&gt;It&amp;#39;s not magic. It&amp;#39;s just unix.&lt;/p&gt;
&lt;h2&gt;The server migration&lt;/h2&gt;
&lt;p&gt;Yesterday, I migrated my home server. I asked Cici to find all &lt;code&gt;.avi&lt;/code&gt; files, convert them to &lt;code&gt;.mp4&lt;/code&gt;, and delete the originals.&lt;/p&gt;
&lt;p&gt;A smarter agent might have analyzed video metadata, checked codecs using some library, or asked me about bitrate preferences.&lt;/p&gt;
&lt;p&gt;Cici just ran &lt;code&gt;find&lt;/code&gt; piped to &lt;code&gt;ffmpeg&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;find /mnt/data -name &amp;quot;*.avi&amp;quot; -exec ffmpeg -i {} ... \;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Brute force. Stupid. Worked perfectly while I was at the office.&lt;/p&gt;
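&lt;p&gt;Spelled out, the loop looks roughly like this. The codec flags and the delete step are my reconstruction, not the exact command Cici ran, and it prints the commands instead of executing them; drop the &lt;code&gt;echo&lt;/code&gt;s to do it for real:&lt;/p&gt;

```shell
# Dry-run sketch: walk the media tree, convert each .avi, then remove it.
convert_avis() {
  find "$1" -name "*.avi" -print0 2>/dev/null |
    while IFS= read -r -d '' f; do
      echo nice -n 19 ffmpeg -i "$f" -c:v libx264 -crf 23 "${f%.avi}.mp4"
      echo rm -- "$f"   # remove the source only after a successful convert
    done
}

convert_avis /mnt/data
```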
&lt;h2&gt;Self-hosting makes this easier&lt;/h2&gt;
&lt;p&gt;The shell-first approach is easy to extend. If I want my agent to support a new tool, I don&amp;#39;t wait for a plugin update. I just install the CLI.&lt;/p&gt;
&lt;p&gt;Need network speed? &lt;code&gt;speedtest-cli&lt;/code&gt;. Docker management? Already there.&lt;/p&gt;
&lt;p&gt;The agent is an extension of my terminal, which is the thing I actually know how to use.&lt;/p&gt;
&lt;h2&gt;The obvious downsides&lt;/h2&gt;
&lt;p&gt;Look, I&amp;#39;m not saying dumb agents are better at everything.&lt;/p&gt;
&lt;p&gt;Cici is bad at renaming variables. She&amp;#39;ll grep for the string and replace it, which works until there&amp;#39;s another variable with a similar name or the string shows up in a comment. Then she breaks things.&lt;/p&gt;
&lt;p&gt;She also can&amp;#39;t debug across multiple files. If an error involves understanding how three modules interact, she&amp;#39;s useless. She doesn&amp;#39;t hold that kind of context.&lt;/p&gt;
&lt;p&gt;For those cases I just open Ampcode in Termius and paste in the prompt myself. It&amp;#39;s more manual but at least I know what context it&amp;#39;s working with.&lt;/p&gt;
&lt;h2&gt;Why I trust dumb tools more&lt;/h2&gt;
&lt;p&gt;This probably sounds backwards, but I trust the dumb agent more because I can see what it&amp;#39;s doing.&lt;/p&gt;
&lt;p&gt;When Cici messes up, I see the command. I can run it myself. I can fix it.&lt;/p&gt;
&lt;p&gt;When a smart agent messes up, I&amp;#39;m left guessing. Did it read the wrong file? Did it get confused by something in the context? Who knows. The failure mode is a 30-second spinner followed by nonsense.&lt;/p&gt;
&lt;h2&gt;The middle ground exists&lt;/h2&gt;
&lt;p&gt;Someone&amp;#39;s going to point out that tools like Sourcegraph or zread.ai solve the context problem without going full &amp;quot;dumb agent.&amp;quot; And yeah, fair. Code search that actually understands your codebase is different from an agent that tries to load everything into memory.&lt;/p&gt;
&lt;p&gt;I haven&amp;#39;t used zread.ai but I&amp;#39;ve messed with Sourcegraph and it&amp;#39;s good at finding the right files fast. If something like that fed context into an LLM instead of the agent trying to figure it out itself, that might actually work.&lt;/p&gt;
&lt;p&gt;Maybe the answer isn&amp;#39;t &amp;quot;dumb agents&amp;quot; vs &amp;quot;smart agents&amp;quot; but &amp;quot;agents that let humans pick context&amp;quot; vs &amp;quot;agents that guess.&amp;quot; I&amp;#39;m not sure. Still figuring it out.&lt;/p&gt;
&lt;h2&gt;Anyway&lt;/h2&gt;
&lt;p&gt;I&amp;#39;m probably wrong about half of this. The smart agents will get better. Someone will figure out how to do context selection without eating my entire node_modules folder.&lt;/p&gt;
&lt;p&gt;But for now? I&amp;#39;ll take grep and an LLM over a black-box agent.&lt;/p&gt;
</content:encoded></item><item><title>The Reality of AI Pair Programming: It&apos;s Not Magic, It&apos;s Management</title><link>https://rezhajul.io/posts/reality-ai-pair-programming-management/</link><guid isPermaLink="true">https://rezhajul.io/posts/reality-ai-pair-programming-management/</guid><description>Vibe coding is dangerous. AI isn&apos;t a senior engineer replacement, it&apos;s a junior that needs a manager. Here&apos;s what I learned migrating my home server with an AI agent.</description><pubDate>Mon, 02 Feb 2026 14:20:00 GMT</pubDate><content:encoded>&lt;p&gt;The hype around AI coding agents like Cursor, Copilot, Claude Code is loud. Twitter (or X, whatever) is full of demos showing agents building entire apps from a single prompt. People call it &amp;quot;vibe coding.&amp;quot; You just vibe, the AI codes.&lt;/p&gt;
&lt;p&gt;But after spending a weekend migrating my home server with Cici (my OpenClaw agent), I have a different take.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AI doesn&amp;#39;t replace the engineer. It forces you to become an engineering manager.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;The &amp;quot;vibe coding&amp;quot; trap&lt;/h2&gt;
&lt;p&gt;Dax, creator of SST and OpenCode, &lt;a href=&quot;https://x.com/thdxr/status/2017843016325648878&quot;&gt;said it best&lt;/a&gt;: &amp;quot;you can vibe code all of it who cares... but everyone will know they can feel it in your work how lazy you&amp;#39;ve become.&amp;quot;&lt;/p&gt;
&lt;p&gt;It&amp;#39;s easy to get addicted to the speed. You ask for a feature, the terminal flies, and suddenly you have code. But AI is a yes-man. It will confidently generate code that works &lt;em&gt;now&lt;/em&gt; but rots &lt;em&gt;later&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I ran into this. I asked Cici to check my router.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Cici:&lt;/strong&gt; &amp;quot;I&amp;#39;ll try to hack the login page using a standard POST request!&amp;quot;
&lt;strong&gt;Me:&lt;/strong&gt; &amp;quot;Wait, that&amp;#39;s a modern TP-Link AX router. It uses a Vue.js SPA and encrypted tokens. That old &lt;code&gt;curl&lt;/code&gt; method won&amp;#39;t work.&amp;quot;&lt;/p&gt;
&lt;p&gt;If I hadn&amp;#39;t stepped in, the agent would have wasted cycles trying 2015-era exploits on 2025 hardware. That&amp;#39;s the trap. If you don&amp;#39;t know the domain, you can&amp;#39;t manage the agent. You&amp;#39;re just blindly approving pull requests from a junior dev who drank too much coffee.&lt;/p&gt;
&lt;h2&gt;From writer to editor&lt;/h2&gt;
&lt;p&gt;I used to spend 80% of my time typing syntax. &lt;code&gt;for (let i = 0; i &amp;lt; n; i++)&lt;/code&gt;. Now, I type English.&lt;/p&gt;
&lt;p&gt;But the mental load hasn&amp;#39;t disappeared. It shifted.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Before, I was the writer. I owned every character.&lt;/li&gt;
&lt;li&gt;Now, I am the editor. I review diffs and catch logic errors before they ship.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is actually &lt;em&gt;harder&lt;/em&gt; in some ways. Reading code is harder than writing it. When you write, you build a mental model step-by-step. When you review AI code, you have to reverse-engineer its mental model instantly to spot bugs.&lt;/p&gt;
&lt;p&gt;If you get lazy and stop reading the code, if you just &amp;quot;vibe,&amp;quot; you are building technical debt at 100x speed.&lt;/p&gt;
&lt;h2&gt;Context is king&lt;/h2&gt;
&lt;p&gt;AI is smart, but it&amp;#39;s amnesiac. It doesn&amp;#39;t know your network topology, your naming conventions, or why you hate &lt;code&gt;useEffect&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The success of my server migration wasn&amp;#39;t because the model was a genius. It was because I spent hours writing documentation &lt;em&gt;for&lt;/em&gt; the agent.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;TOOLS.md&lt;/code&gt;: Listed my server IP, router model, and storage paths.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;MEMORY.md&lt;/code&gt;: Logged past decisions so we didn&amp;#39;t repeat mistakes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When I asked Cici to &lt;em&gt;move the backup folder&lt;/em&gt;, she didn&amp;#39;t ask &amp;quot;which folder?&amp;quot; or &amp;quot;where?&amp;quot;. She read the context, found the path &lt;code&gt;/mnt/data/Downloads/Backup&lt;/code&gt;, and executed the move.&lt;/p&gt;
&lt;p&gt;Documentation is no longer just for humans. It&amp;#39;s the API for your AI agent.&lt;/p&gt;
&lt;h2&gt;It&amp;#39;s still fun (but different)&lt;/h2&gt;
&lt;p&gt;Don&amp;#39;t get me wrong, I&amp;#39;m not going back.&lt;/p&gt;
&lt;p&gt;Handling that migration manually would have been a chore. Finding 40 scattered &lt;code&gt;.avi&lt;/code&gt; files, converting them to &lt;code&gt;.mp4&lt;/code&gt;, and deleting the originals? That&amp;#39;s boring work.&lt;/p&gt;
&lt;p&gt;With the agent, I said:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Find all AVIs, convert them to MP4 using 2 cores (low priority), and delete the source.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And it happened in the background while I went to work. That felt like a superpower.&lt;/p&gt;
&lt;p&gt;We aren&amp;#39;t &amp;quot;coding&amp;quot; less. We&amp;#39;re solving problems faster. The syntax is becoming optional.&lt;/p&gt;
&lt;p&gt;Treat your AI agent like a talented, eager, but slightly hallucinating intern.&lt;/p&gt;
&lt;p&gt;Give them clear docs and review their work before it hits production. And please, for the love of clean code, don&amp;#39;t just &amp;quot;vibe.&amp;quot;&lt;/p&gt;
&lt;p&gt;Manage them.&lt;/p&gt;
</content:encoded></item><item><title>Reducing OpenClaw Heartbeat Token Usage</title><link>https://rezhajul.io/posts/reducing-openclaw-heartbeat-token-usage/</link><guid isPermaLink="true">https://rezhajul.io/posts/reducing-openclaw-heartbeat-token-usage/</guid><description>Running an AI agent 24/7 sounds cool until you see the token bill. Here&apos;s how to cut your heartbeat costs.</description><pubDate>Sun, 01 Feb 2026 14:30:00 GMT</pubDate><content:encoded>&lt;p&gt;Running an AI agent 24/7 sounds cool until you see the token bill. One of the biggest culprits? Heartbeats, those periodic check-ins that keep your agent aware.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s how I cut mine down.&lt;/p&gt;
&lt;h2&gt;What are heartbeats?&lt;/h2&gt;
&lt;p&gt;In OpenClaw (formerly Clawdbot), heartbeats are periodic polls that wake your agent to check for pending tasks. The default behavior:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Agent receives a heartbeat prompt&lt;/li&gt;
&lt;li&gt;Agent reads &lt;code&gt;HEARTBEAT.md&lt;/code&gt; and workspace context&lt;/li&gt;
&lt;li&gt;Agent either takes action or replies &lt;code&gt;HEARTBEAT_OK&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Every heartbeat consumes tokens for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The system prompt and agent identity&lt;/li&gt;
&lt;li&gt;Reading &lt;code&gt;HEARTBEAT.md&lt;/code&gt; and referenced files&lt;/li&gt;
&lt;li&gt;Any actions taken or responses generated&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At the default 30-minute interval, that&amp;#39;s 48 heartbeats per day. The tokens add up fast.&lt;/p&gt;
&lt;h2&gt;Strategy 1: Slim down HEARTBEAT.md&lt;/h2&gt;
&lt;p&gt;Your &lt;code&gt;HEARTBEAT.md&lt;/code&gt; gets read on every single heartbeat. Keep it minimal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Before (token-heavy):&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Heartbeat Checklist

## Routine Checks
1. Daily Log: Check if memory/YYYY-MM-DD.md exists...
2. Inbox Check: Check urgent emails or notifications...
3. Moltbook Check: Read skills/moltbook/HEARTBEAT.md for full routine...
4. Log Rotation: Check date header in memory/moltbook_activity.md...
5. Self-Correction: If IDENTITY.md or SOUL.md is missing...

## Pending Tasks
- [ ] Retry Moltbook Post (Security)...
- [ ] Retry Moltbook Post (Sandboxing)...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;After (lean):&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Heartbeat
- Ensure `memory/YYYY-MM-DD.md` exists
- Check DMs if due (see heartbeat-state.json)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Move pending tasks to &lt;strong&gt;cron jobs&lt;/strong&gt; instead of tracking them in HEARTBEAT.md. Cron jobs run in isolated sessions and don&amp;#39;t bloat your main context.&lt;/p&gt;
&lt;h2&gt;Strategy 2: Increase the interval&lt;/h2&gt;
&lt;p&gt;Edit your &lt;code&gt;~/.openclaw/openclaw.json&lt;/code&gt; (or &lt;code&gt;%USERPROFILE%\.openclaw\openclaw.json&lt;/code&gt; on Windows) and add heartbeat configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;agents&amp;quot;: {
    &amp;quot;defaults&amp;quot;: {
      &amp;quot;heartbeat&amp;quot;: {
        &amp;quot;every&amp;quot;: &amp;quot;60m&amp;quot;
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;every&lt;/code&gt; field accepts duration strings (&lt;code&gt;ms&lt;/code&gt;, &lt;code&gt;s&lt;/code&gt;, &lt;code&gt;m&lt;/code&gt;, &lt;code&gt;h&lt;/code&gt;); default unit is minutes. Going from 30 to 60 minutes cuts your heartbeat tokens in half. Hourly check-ins are fine for most agents.&lt;/p&gt;
&lt;h2&gt;Strategy 3: Use a cheaper model&lt;/h2&gt;
&lt;p&gt;Heartbeats are usually simple &amp;quot;check and respond&amp;quot; operations. They don&amp;#39;t need your most powerful model.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;agents&amp;quot;: {
    &amp;quot;defaults&amp;quot;: {
      &amp;quot;heartbeat&amp;quot;: {
        &amp;quot;model&amp;quot;: &amp;quot;qwen-portal/coder-model&amp;quot;
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A lighter model means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Faster responses&lt;/li&gt;
&lt;li&gt;Lower token costs (if using paid APIs like OpenAI/Anthropic)&lt;/li&gt;
&lt;li&gt;Less compute for simple yes/no decisions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This saves on &lt;em&gt;generation&lt;/em&gt; costs. The input context (files, memory) is still loaded, so keeping &lt;code&gt;HEARTBEAT.md&lt;/code&gt; small (Strategy 1) is still crucial.&lt;/p&gt;
&lt;h2&gt;Strategy 4: Replace heartbeats with cron jobs&lt;/h2&gt;
&lt;p&gt;Heartbeats are convenient but inefficient for scheduled tasks. Compare:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Heartbeat&lt;/th&gt;
&lt;th&gt;Cron Job&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;Session&lt;/td&gt;
&lt;td&gt;Main (carries context)&lt;/td&gt;
&lt;td&gt;Isolated (clean slate)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Timing&lt;/td&gt;
&lt;td&gt;Approximate (~30 min)&lt;/td&gt;
&lt;td&gt;Exact (cron expression)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Token cost&lt;/td&gt;
&lt;td&gt;High (loads full context)&lt;/td&gt;
&lt;td&gt;Low (minimal context)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Quick checks, batching&lt;/td&gt;
&lt;td&gt;Scheduled tasks, reports&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;p&gt;If your heartbeat is mostly running scheduled tasks, migrate them to cron:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;openclaw cron add --name &amp;quot;daily-log-check&amp;quot; \
  --schedule &amp;quot;0 9 * * *&amp;quot; \
  --message &amp;quot;Ensure memory/$(date +%Y-%m-%d).md exists&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Strategy 5: Disable heartbeats entirely&lt;/h2&gt;
&lt;p&gt;If you&amp;#39;re fully cron-driven, turn off heartbeats:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;agents&amp;quot;: {
    &amp;quot;defaults&amp;quot;: {
      &amp;quot;heartbeat&amp;quot;: {
        &amp;quot;enabled&amp;quot;: false
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Your agent will only wake for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Direct messages&lt;/li&gt;
&lt;li&gt;Cron job triggers&lt;/li&gt;
&lt;li&gt;Webhook events&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No more background token burn.&lt;/p&gt;
&lt;h2&gt;Quick wins&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Audit your &lt;code&gt;HEARTBEAT.md&lt;/code&gt;. Delete anything that could be a cron job.&lt;/li&gt;
&lt;li&gt;Remove nested file reads (don&amp;#39;t reference other HEARTBEAT.md files).&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;every&lt;/code&gt; to &lt;code&gt;&amp;quot;60m&amp;quot;&lt;/code&gt; or higher.&lt;/li&gt;
&lt;li&gt;Use a lighter model for heartbeat responses.&lt;/li&gt;
&lt;li&gt;Track check timestamps in &lt;code&gt;memory/heartbeat-state.json&lt;/code&gt; to avoid redundant work.&lt;/li&gt;
&lt;/ul&gt;
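&lt;p&gt;That last one is easy to wire up. A sketch of the skip-if-recent check (the file format and field name here are assumptions; match whatever your agent actually writes):&lt;/p&gt;

```shell
# Prints "yes" (and stamps the state file) if at least $2 seconds have
# passed since the last check recorded in $1, otherwise prints "no".
should_check() {
  now=$(date +%s)
  last=$(sed -n 's/.*"last_dm_check":[[:space:]]*\([0-9]*\).*/\1/p' "$1" 2>/dev/null)
  if [ $(( now - ${last:-0} )) -ge "$2" ]; then
    printf '{"last_dm_check": %s}\n' "$now" > "$1"
    echo yes
  else
    echo no
  fi
}
```

&lt;p&gt;The heartbeat prompt can then say &amp;quot;check DMs only if due&amp;quot; and the agent spends a handful of shell tokens instead of re-reading its inbox every cycle.&lt;/p&gt;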
&lt;h2&gt;The math&lt;/h2&gt;
&lt;p&gt;Say your heartbeat consumes ~2,000 tokens per cycle:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Interval&lt;/th&gt;
&lt;th&gt;Daily Heartbeats&lt;/th&gt;
&lt;th&gt;Daily Tokens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;&lt;tr&gt;
&lt;td&gt;15 min&lt;/td&gt;
&lt;td&gt;96&lt;/td&gt;
&lt;td&gt;192,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;30 min&lt;/td&gt;
&lt;td&gt;48&lt;/td&gt;
&lt;td&gt;96,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;60 min&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;48,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2 hours&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;24,000&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;p&gt;Doubling your interval = halving your cost. Simple.&lt;/p&gt;
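&lt;p&gt;The whole table is one loop of integer math (assuming the same ~2,000 tokens per cycle):&lt;/p&gt;

```shell
# heartbeats/day = 1440 minutes / interval; tokens/day = heartbeats * 2000
for interval in 15 30 60 120; do
  beats=$(( 1440 / interval ))
  echo "${interval}m: ${beats} heartbeats/day, $(( beats * 2000 )) tokens/day"
done
```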
&lt;p&gt;Heartbeats are easy to over-use. Trim your &lt;code&gt;HEARTBEAT.md&lt;/code&gt;, bump the interval to 60 minutes, move scheduled work to cron jobs, and consider a lighter model.&lt;/p&gt;
&lt;p&gt;Your agent can still respond when you need it without running up the bill.&lt;/p&gt;
</content:encoded></item><item><title>Adding a Links Section to My Blog</title><link>https://rezhajul.io/posts/adding-a-links-section-to-my-blog/</link><guid isPermaLink="true">https://rezhajul.io/posts/adding-a-links-section-to-my-blog/</guid><description>I built a links section to share interesting things I find around the web, with my own commentary.</description><pubDate>Sun, 25 Jan 2026 14:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I keep finding cool stuff on the internet. Articles, tools, random projects.&lt;/p&gt;
&lt;p&gt;The problem? Most of them don&amp;#39;t deserve a full blog post. But I still want to share them somewhere. Twitter feels too ephemeral. A blog post feels too heavy.&lt;/p&gt;
&lt;p&gt;Then I saw Robb Knight&amp;#39;s &lt;a href=&quot;https://rknight.me/links/&quot;&gt;links page&lt;/a&gt;. Perfect. A micro-blog for external content with my own hot takes attached.&lt;/p&gt;
&lt;p&gt;So I built one. Here&amp;#39;s how.&lt;/p&gt;
&lt;h2&gt;The schema&lt;/h2&gt;
&lt;p&gt;Every link is just a markdown file with a URL field. The body is my commentary.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;// z is re-exported by astro:content; baseSchema is the shared post schema
const linksSchema = ({ image }: { image: () =&amp;gt; z.ZodType }) =&amp;gt;
  baseSchema({ image }).extend({
    url: z.string().url(),
    commentaryPrefix: z.string().optional(),
  });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A link file looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;---
title: &amp;quot;Some Interesting Article&amp;quot;
pubDatetime: 2026-01-25T10:00:00+07:00
url: &amp;quot;https://example.com/article&amp;quot;
tags:
  - Web
---

My thoughts on this article go here.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Simple. The markdown body becomes my take on whatever I&amp;#39;m sharing.&lt;/p&gt;
&lt;h2&gt;The card component&lt;/h2&gt;
&lt;p&gt;Each link shows up as a card. Title links out (with a little ↗ arrow), date links to the detail page, and my commentary renders below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-astro&quot;&gt;&amp;lt;li class=&amp;quot;my-6&amp;quot;&amp;gt;
  &amp;lt;a href={url} target=&amp;quot;_blank&amp;quot; rel=&amp;quot;noopener noreferrer&amp;quot;&amp;gt;
    &amp;lt;span&amp;gt;{title}&amp;lt;/span&amp;gt;
    &amp;lt;svg&amp;gt;&amp;lt;!-- arrow icon --&amp;gt;&amp;lt;/svg&amp;gt;
  &amp;lt;/a&amp;gt;
  &amp;lt;a href={`/links/${id}/`}&amp;gt;
    &amp;lt;Datetime {...datetimeProps} /&amp;gt;
  &amp;lt;/a&amp;gt;
  &amp;lt;div class=&amp;quot;prose&amp;quot;&amp;gt;
    &amp;lt;Content /&amp;gt;
  &amp;lt;/div&amp;gt;
&amp;lt;/li&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;render()&lt;/code&gt; function from &lt;code&gt;astro:content&lt;/code&gt; does the heavy lifting, turning markdown into a component I can just drop in.&lt;/p&gt;
&lt;h2&gt;Why detail pages?&lt;/h2&gt;
&lt;p&gt;I could&amp;#39;ve just made links a flat list. But I wanted each link to have its own URL. Why?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Shareable.&lt;/strong&gt; I can send someone &lt;code&gt;/links/cool-article/&lt;/code&gt; instead of &amp;quot;check my links page and scroll down.&amp;quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comments.&lt;/strong&gt; Each link can have its own Mastodon thread.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Navigation.&lt;/strong&gt; Previous/next buttons let you browse through links like posts.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;OG images&lt;/h2&gt;
&lt;p&gt;This is the fun part.&lt;/p&gt;
&lt;p&gt;I wanted links to look different when shared. The OG image shows a &lt;strong&gt;[LINK]&lt;/strong&gt; badge, the title, and the domain extracted from the URL.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function extractDomain(url) {
  try {
    return new URL(url).hostname.replace(/^www\./, &amp;quot;&amp;quot;);
  } catch {
    return &amp;quot;&amp;quot;;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There&amp;#39;s also a &lt;code&gt;commentaryPrefix&lt;/code&gt; field. Defaults to &amp;quot;Rezha on&amp;quot; but I can customize it per link. So when someone shares my link on social media, it says &amp;quot;Rezha on Some Interesting Article&amp;quot; with the source domain below.&lt;/p&gt;
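&lt;p&gt;The prefix logic amounts to a one-liner; a sketch (the &lt;code&gt;shareTitle&lt;/code&gt; helper name is illustrative, not the actual code):&lt;/p&gt;

```typescript
// Compose the OG title: a custom commentaryPrefix wins,
// otherwise fall back to the default "Rezha on".
function shareTitle(title: string, commentaryPrefix?: string): string {
  const prefix = commentaryPrefix ?? "Rezha on";
  return `${prefix} ${title}`;
}

console.log(shareTitle("Some Interesting Article"));
// "Rezha on Some Interesting Article"
```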
&lt;p&gt;The OG images generate at build time:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;export const GET: APIRoute = async ({ props }) =&amp;gt; {
  const buffer = await generateOgImageForLink(props);
  return new Response(new Uint8Array(buffer), {
    headers: { &amp;quot;Content-Type&amp;quot;: &amp;quot;image/png&amp;quot; },
  });
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here&amp;#39;s an example:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/links/agent-psychosis/&quot;&gt;&lt;img src=&quot;/links/agent-psychosis/index.png&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Scaffolding&lt;/h2&gt;
&lt;p&gt;I already had &lt;code&gt;bun run new&lt;/code&gt; for posts and notes. Added links to the mix.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ bun run new
? What do you want to create? › link
? URL: › https://example.com/article
? Title: › Some Interesting Article

✅ Created: src/data/links/some-interesting-article.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It grabs the URL, asks for a title, and generates the frontmatter. I just fill in my commentary and publish.&lt;/p&gt;
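&lt;p&gt;The generated file mirrors the frontmatter shown earlier; a simplified sketch of what the generator might emit (the real script formats the date with a timezone offset rather than UTC):&lt;/p&gt;

```typescript
// Simplified frontmatter generator for a link entry
// (field names match the example file shown earlier).
function linkFrontmatter(title: string, url: string, date: Date): string {
  return [
    "---",
    `title: "${title}"`,
    `pubDatetime: ${date.toISOString()}`,
    `url: "${url}"`,
    "tags: []",
    "---",
    "",
  ].join("\n");
}
```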
&lt;h2&gt;The result&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;List page at &lt;code&gt;/links/&lt;/code&gt; with pagination&lt;/li&gt;
&lt;li&gt;Detail pages at &lt;code&gt;/links/[slug]/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Custom OG images&lt;/li&gt;
&lt;li&gt;Previous/next navigation&lt;/li&gt;
&lt;li&gt;Works with the existing tag system&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Total time: a weekend. Most of it was fiddling with the OG image layout.&lt;/p&gt;
&lt;p&gt;Is it overkill for sharing links? Maybe. But now I have a place for all those &amp;quot;this is cool but not blog-post-worthy&amp;quot; finds.&lt;/p&gt;
&lt;p&gt;Check it out at &lt;a href=&quot;/links/&quot;&gt;/links/&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Pair Programming with a Lobster: My Week with Clawdbot</title><link>https://rezhajul.io/posts/pair-programming-with-a-lobster/</link><guid isPermaLink="true">https://rezhajul.io/posts/pair-programming-with-a-lobster/</guid><description>It’s weird, it lives in my server, and it runs 100 laps when it makes a mistake. Here is what it&apos;s like to code with an autonomous agent.</description><pubDate>Fri, 23 Jan 2026 23:44:00 GMT</pubDate><content:encoded>&lt;p&gt;Meet the Lobster. 🦞&lt;/p&gt;
&lt;p&gt;No, not dinner. I&amp;#39;m talking about &lt;strong&gt;Clawdbot&lt;/strong&gt;, an AI agent that lives in my server.&lt;/p&gt;
&lt;p&gt;Most AI tools are just chat apps. You ask, they answer. Boring.
Clawdbot is different. It has shell access. It can run commands, edit files, and deploy my code. It&amp;#39;s like having a junior dev who types really fast but sometimes breaks things.&lt;/p&gt;
&lt;p&gt;I used it to move this blog from Zola to Astro. Here&amp;#39;s what happened.&lt;/p&gt;
&lt;h2&gt;The &amp;quot;Intern&amp;quot; Vibe&lt;/h2&gt;
&lt;p&gt;It changes how you code. You stop typing and start managing.
I say: &lt;em&gt;&amp;quot;Split the blog into notes and posts.&amp;quot;&lt;/em&gt;
It executes: Writes the config, moves the files.&lt;/p&gt;
&lt;p&gt;But it makes mistakes. Once, it broke all my image links with a bad find-and-replace command.
I told it: &lt;em&gt;&amp;quot;You broke it. Fix it.&amp;quot;&lt;/em&gt;
And it did. It even apologized.&lt;/p&gt;
&lt;p&gt;I once made it run 100 laps as punishment. It actually counted to 100.&lt;/p&gt;
&lt;h2&gt;Why It&amp;#39;s Cool&lt;/h2&gt;
&lt;h3&gt;1. It Runs Commands&lt;/h3&gt;
&lt;p&gt;When a build fails, I don&amp;#39;t copy-paste errors. Clawdbot sees the error because &lt;em&gt;it&lt;/em&gt; ran the build. It fixes the config and tries again. Fast.&lt;/p&gt;
&lt;h3&gt;2. It Remembers&lt;/h3&gt;
&lt;p&gt;I don&amp;#39;t have to keep saying &amp;quot;I use Bun&amp;quot; or &amp;quot;My timezone is Jakarta.&amp;quot; It remembers.&lt;/p&gt;
&lt;h3&gt;3. Infinite Toolbelt&lt;/h3&gt;
&lt;p&gt;It&amp;#39;s not just a coder. I gave it the &lt;code&gt;bird&lt;/code&gt; skill, and now it runs a cron job to summarize my Twitter/X timeline. It uses &lt;code&gt;gog&lt;/code&gt; to check my calendar, and &lt;code&gt;intomd&lt;/code&gt; to parse documentation. It even has a &lt;code&gt;frontend-design&lt;/code&gt; skill to help me build UI components.&lt;/p&gt;
&lt;h3&gt;4. It Listens (Literally)&lt;/h3&gt;
&lt;p&gt;I don&amp;#39;t always want to type commands. Sometimes I just send a voice note on Telegram: &lt;em&gt;&amp;quot;Deploy to staging.&amp;quot;&lt;/em&gt;
Clawdbot transcribes the audio, understands the intent, and executes the code. It feels less like a terminal and more like talking to a teammate.&lt;/p&gt;
&lt;h3&gt;5. It Debugs Itself (Sometimes)&lt;/h3&gt;
&lt;p&gt;When a deployment failed because of a permission error, Clawdbot didn&amp;#39;t just crash. It suggested a fix: &lt;em&gt;&amp;quot;Revert the commit and try again.&amp;quot;&lt;/em&gt; It reads its own error logs. That&amp;#39;s better than most interns.&lt;/p&gt;
&lt;h2&gt;The Scary Part&lt;/h2&gt;
&lt;p&gt;Giving AI &lt;strong&gt;shell access&lt;/strong&gt; is scary. I watched it delete folders and rewrite configs.
But the speed? Crazy.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Built a custom CLI content generator.&lt;/li&gt;
&lt;li&gt;Set up CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;Added &lt;strong&gt;Mastodon-powered comments&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All in one weekend.&lt;/p&gt;
&lt;p&gt;Is it perfect? No.
But for the first time, I felt like I was &lt;strong&gt;working with&lt;/strong&gt; AI, not just using it.
It&amp;#39;s messy and fast. Just double-check its work before you deploy.&lt;/p&gt;
</content:encoded></item><item><title>Automating My Blog Workflow with Bun</title><link>https://rezhajul.io/posts/automating-my-blog-workflow-with-bun/</link><guid isPermaLink="true">https://rezhajul.io/posts/automating-my-blog-workflow-with-bun/</guid><description>How I killed the friction of writing by building a custom CLI generator for Astro using Bun.</description><pubDate>Fri, 23 Jan 2026 18:42:00 GMT</pubDate><content:encoded>&lt;p&gt;I have a confession: I hate writing frontmatter.&lt;/p&gt;
&lt;p&gt;That metadata block at the top of every markdown file? The date, the title, the slug, the tags. It&amp;#39;s friction. And friction kills writing habits dead.&lt;/p&gt;
&lt;p&gt;Every time I wanted to write a quick note, I had to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Copy an old post&lt;/li&gt;
&lt;li&gt;Delete the content&lt;/li&gt;
&lt;li&gt;Manually type out the ISO 8601 date (who remembers the current time in UTC?)&lt;/li&gt;
&lt;li&gt;Invent a slug&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By the time I finished the setup, I&amp;#39;d lost the thought entirely.&lt;/p&gt;
&lt;p&gt;So, in classic developer fashion, I wrote a script to save myself 30 seconds per post. Honestly? It was worth it.&lt;/p&gt;
&lt;h2&gt;Why Bun?&lt;/h2&gt;
&lt;p&gt;I recently migrated this blog to &lt;strong&gt;Astro&lt;/strong&gt;. It&amp;#39;s fast, modern, and runs on JavaScript/TypeScript.&lt;/p&gt;
&lt;p&gt;I didn&amp;#39;t want a heavy CMS. I just wanted a simple CLI command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;bun run new note
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I went with &lt;strong&gt;Bun&lt;/strong&gt; because it treats TypeScript as a first-class citizen. No build steps, no &lt;code&gt;ts-node&lt;/code&gt; configuration hell. You just write &lt;code&gt;.ts&lt;/code&gt; and run it. Plus, it&amp;#39;s incredibly fast for CLI scripts.&lt;/p&gt;
&lt;h2&gt;The script architecture&lt;/h2&gt;
&lt;p&gt;The goal was simple: a script that asks &amp;quot;What do you want to write?&amp;quot; and generates the file for me.&lt;/p&gt;
&lt;p&gt;I split this into two files:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;scripts/new-content.ts&lt;/code&gt; - The main CLI logic&lt;/li&gt;
&lt;li&gt;&lt;code&gt;scripts/content-types.ts&lt;/code&gt; - Configuration for different content types&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Main script (&lt;code&gt;scripts/new-content.ts&lt;/code&gt;)&lt;/h3&gt;
&lt;p&gt;Here&amp;#39;s the core of the implementation. It handles both CLI arguments (for speed) and interactive prompts (for when I&amp;#39;m feeling lazy):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;import { contentTypes, getContentType } from &amp;quot;./content-types&amp;quot;;
import type {
  ContentInput,
  ContentTypeConfig,
  PromptConfig,
} from &amp;quot;./content-types&amp;quot;;
import { mkdir, writeFile } from &amp;quot;node:fs/promises&amp;quot;;
import { join } from &amp;quot;node:path&amp;quot;;

function parseCliArgs() {
  const args = process.argv.slice(2);
  const result: {
    type?: string;
    title?: string;
    tags?: string;
    help?: boolean;
  } = {};
  const positionals: string[] = [];

  for (let i = 0; i &amp;lt; args.length; i++) {
    const arg = args[i];
    if (arg === &amp;quot;-h&amp;quot; || arg === &amp;quot;--help&amp;quot;) {
      result.help = true;
    } else if (arg === &amp;quot;-t&amp;quot; || arg === &amp;quot;--title&amp;quot;) {
      result.title = args[++i];
    } else if (arg === &amp;quot;--tags&amp;quot;) {
      result.tags = args[++i];
    } else if (!arg.startsWith(&amp;quot;-&amp;quot;)) {
      positionals.push(arg);
    }
  }

  return {
    type: positionals[0],
    title: result.title,
    tags: result.tags,
    help: result.help,
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Manual parsing because I wanted fine-grained control over flags like &lt;code&gt;-t&lt;/code&gt; and &lt;code&gt;--tags&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;async function createContent(
  type: ContentTypeConfig,
  input: ContentInput
): Promise&amp;lt;string&amp;gt; {
  const filename = type.generateFilename(input);
  const frontmatter = type.generateFrontmatter(input);
  const dirPath = join(process.cwd(), type.path);
  const filePath = join(dirPath, filename);

  await mkdir(dirPath, { recursive: true });
  await writeFile(filePath, frontmatter);

  return filePath;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This writes the file. The &lt;code&gt;{ recursive: true }&lt;/code&gt; flag creates parent directories if needed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;async function runInteractive() {
  const rl = await createReadlineInterface();

  const type = await selectContentType(rl);
  if (!type) {
    rl.close();
    process.exit(1);
  }

  console.log(`\nCreating new ${type.name}...\n`);

  const promptInputs = await collectPromptInputs(rl, type.prompts || []);

  const input: ContentInput = {
    ...promptInputs,
    now: new Date(),
  };

  const filePath = await createContent(type, input);
  console.log(`\n✅ Created: ${filePath}\n`);

  rl.close();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Interactive mode. Prompts you to pick a content type and fill in the fields.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;async function runNonInteractive(
  typeName: string,
  title?: string,
  tags?: string
) {
  const type = getContentType(typeName.toLowerCase());
  if (!type) {
    console.error(`Unknown content type: ${typeName}`);
    console.log(`Available types: ${contentTypes.map(t =&amp;gt; t.name).join(&amp;quot;, &amp;quot;)}`);
    process.exit(1);
  }

  const hasRequiredTitle = type.prompts?.some(
    p =&amp;gt; p.key === &amp;quot;title&amp;quot; &amp;amp;&amp;amp; p.required
  );
  if (hasRequiredTitle &amp;amp;&amp;amp; !title) {
    console.error(`Error: --title is required for ${type.name}`);
    process.exit(1);
  }

  const input: ContentInput = {
    title,
    tags: tags
      ? tags
          .split(&amp;quot;,&amp;quot;)
          .map(t =&amp;gt; t.trim())
          .filter(Boolean)
      : undefined,
    now: new Date(),
  };

  const filePath = await createContent(type, input);
  console.log(`✅ Created: ${filePath}`);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Non-interactive mode. If you pass arguments directly, it skips the prompts.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;async function main() {
  const args = parseCliArgs();

  if (args.help) {
    showHelp();
    process.exit(0);
  }

  if (args.type) {
    await runNonInteractive(args.type, args.title, args.tags);
  } else {
    await runInteractive();
  }
}

main();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Entry point. If you pass a type, it runs non-interactive. Otherwise, it prompts.&lt;/p&gt;
&lt;h3&gt;Content type configuration (&lt;code&gt;scripts/content-types.ts&lt;/code&gt;)&lt;/h3&gt;
&lt;p&gt;This is where a &amp;quot;Post&amp;quot; differs from a &amp;quot;Note&amp;quot;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;export interface ContentTypeConfig {
  name: string;
  description: string;
  path: string;
  generateFilename: (input: ContentInput) =&amp;gt; string;
  generateFrontmatter: (input: ContentInput) =&amp;gt; string;
  prompts?: PromptConfig[];
}

export interface ContentInput {
  title?: string;
  slug?: string;
  tags?: string[];
  now: Date;
}

export interface PromptConfig {
  key: keyof ContentInput;
  message: string;
  required: boolean;
  transform?: (value: string) =&amp;gt; string | string[];
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The interfaces are self-explanatory. &lt;code&gt;ContentTypeConfig&lt;/code&gt; tells the script where to save files and how to generate the frontmatter.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;function toKebabCase(str: string): string {
  return str
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, &amp;quot;&amp;quot;)
    .replace(/\s+/g, &amp;quot;-&amp;quot;)
    .replace(/-+/g, &amp;quot;-&amp;quot;)
    .replace(/^-|-$/g, &amp;quot;&amp;quot;);
}

function formatISOWithTimezone(date: Date): string {
  const pad = (n: number) =&amp;gt; n.toString().padStart(2, &amp;quot;0&amp;quot;);
  const offsetMinutes = -date.getTimezoneOffset();
  const offsetHours = Math.floor(Math.abs(offsetMinutes) / 60);
  const offsetMins = Math.abs(offsetMinutes) % 60;
  const offsetSign = offsetMinutes &amp;gt;= 0 ? &amp;quot;+&amp;quot; : &amp;quot;-&amp;quot;;

  return (
    `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}` +
    `T${pad(date.getHours())}:${pad(date.getMinutes())}:${pad(date.getSeconds())}` +
    `${offsetSign}${pad(offsetHours)}:${pad(offsetMins)}`
  );
}

function formatHumanReadableDate(date: Date): string {
  const day = date.getDate();
  const month = date.toLocaleString(&amp;quot;en-US&amp;quot;, { month: &amp;quot;long&amp;quot; });
  const year = date.getFullYear();
  const hours = date.getHours().toString().padStart(2, &amp;quot;0&amp;quot;);
  const minutes = date.getMinutes().toString().padStart(2, &amp;quot;0&amp;quot;);
  return `${day} ${month} ${year} at ${hours}:${minutes}`;
}

function formatTimestampFilename(date: Date): string {
  const pad = (n: number) =&amp;gt; n.toString().padStart(2, &amp;quot;0&amp;quot;);
  return (
    `${date.getFullYear()}` +
    `${pad(date.getMonth() + 1)}` +
    `${pad(date.getDate())}` +
    `${pad(date.getHours())}` +
    `${pad(date.getMinutes())}`
  );
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Utility functions for slugs and date formatting. The ISO format with timezone is what Astro expects.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const postType: ContentTypeConfig = {
  name: &amp;quot;post&amp;quot;,
  description: &amp;quot;Blog post with title and tags&amp;quot;,
  path: &amp;quot;src/data/blog&amp;quot;,
  prompts: [
    {
      key: &amp;quot;title&amp;quot;,
      message: &amp;quot;Post title:&amp;quot;,
      required: true,
    },
    {
      key: &amp;quot;tags&amp;quot;,
      message: &amp;quot;Tags (comma-separated, optional):&amp;quot;,
      required: false,
      transform: (value: string) =&amp;gt;
        value
          .split(&amp;quot;,&amp;quot;)
          .map(t =&amp;gt; t.trim())
          .filter(Boolean),
    },
  ],
  generateFilename: (input: ContentInput) =&amp;gt; {
    const slug = input.slug || toKebabCase(input.title || &amp;quot;untitled&amp;quot;);
    return `${slug}.md`;
  },
  generateFrontmatter: (input: ContentInput) =&amp;gt; {
    const slug = input.slug || toKebabCase(input.title || &amp;quot;untitled&amp;quot;);
    const pubDatetime = formatISOWithTimezone(input.now);
    const tags = input.tags?.length ? input.tags : [];
    const tagsYaml =
      tags.length &amp;gt; 0
        ? `tags:\n${tags.map(t =&amp;gt; `  - ${t}`).join(&amp;quot;\n&amp;quot;)}`
        : &amp;quot;tags: []&amp;quot;;

    return `---
title: &amp;quot;${input.title || &amp;quot;Untitled&amp;quot;}&amp;quot;
slug: &amp;quot;${slug}&amp;quot;
pubDatetime: ${pubDatetime}
description: &amp;quot;&amp;quot;
draft: true
${tagsYaml}
---
`;
  },
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Posts go to &lt;code&gt;src/data/blog&lt;/code&gt;, require a title, and optionally accept tags.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;const noteType: ContentTypeConfig = {
  name: &amp;quot;note&amp;quot;,
  description: &amp;quot;Quick note with timestamp-based naming&amp;quot;,
  path: &amp;quot;src/data/notes&amp;quot;,
  prompts: [],
  generateFilename: (input: ContentInput) =&amp;gt; {
    return `${formatTimestampFilename(input.now)}.md`;
  },
  generateFrontmatter: (input: ContentInput) =&amp;gt; {
    const timestamp = formatTimestampFilename(input.now);
    const humanTitle = formatHumanReadableDate(input.now);
    const pubDatetime = formatISOWithTimezone(input.now);

    return `---
title: &amp;quot;${humanTitle}&amp;quot;
slug: &amp;quot;${timestamp}&amp;quot;
pubDatetime: ${pubDatetime}
description: &amp;quot;&amp;quot;
tags:
  - note
---
`;
  },
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notes have no prompts. They just use the current timestamp as filename and title.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;export const contentTypes: ContentTypeConfig[] = [postType, noteType];

export function getContentType(name: string): ContentTypeConfig | undefined {
  return contentTypes.find(t =&amp;gt; t.name === name);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Export the types so the main script can use them.&lt;/p&gt;
&lt;h2&gt;The &amp;quot;note&amp;quot; workflow&lt;/h2&gt;
&lt;p&gt;My favorite part is the &lt;strong&gt;Note&lt;/strong&gt; generator. Notes on this blog are time-based, like tweets. I don&amp;#39;t want to think about titles.&lt;/p&gt;
&lt;p&gt;The script automatically:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Generates a filename based on the current timestamp (e.g., &lt;code&gt;202601231830.md&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Sets the title to a human-readable format (&amp;quot;23 January 2026 at 18:30&amp;quot;)&lt;/li&gt;
&lt;li&gt;Sets the &lt;code&gt;slug&lt;/code&gt; to the timestamp&lt;/li&gt;
&lt;/ol&gt;
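&lt;p&gt;The naming scheme is just zero-padded date parts, mirroring the &lt;code&gt;formatTimestampFilename&lt;/code&gt; helper above:&lt;/p&gt;

```typescript
// 23 January 2026 at 18:30 becomes "202601231830.md".
const pad = (n: number) => n.toString().padStart(2, "0");

function noteFilename(d: Date): string {
  return (
    `${d.getFullYear()}${pad(d.getMonth() + 1)}${pad(d.getDate())}` +
    `${pad(d.getHours())}${pad(d.getMinutes())}.md`
  );
}

console.log(noteFilename(new Date(2026, 0, 23, 18, 30)));
// "202601231830.md"
```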
&lt;p&gt;Now, capturing a thought is as fast as opening my terminal.&lt;/p&gt;
&lt;h2&gt;Interactive mode&lt;/h2&gt;
&lt;p&gt;If you just run &lt;code&gt;bun run new&lt;/code&gt; without arguments, it prompts you step-by-step.&lt;/p&gt;
&lt;h2&gt;Adding new content types&lt;/h2&gt;
&lt;p&gt;Want a &amp;quot;snippet&amp;quot; or &amp;quot;quote&amp;quot; type? Add a new config object in &lt;code&gt;content-types.ts&lt;/code&gt; with path, filename generator, and frontmatter template. Done.&lt;/p&gt;
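&lt;p&gt;For example, a hypothetical &amp;quot;quote&amp;quot; type could look like this (not part of the actual repo, just following the &lt;code&gt;ContentTypeConfig&lt;/code&gt; shape):&lt;/p&gt;

```typescript
// Hypothetical "quote" content type, following the ContentTypeConfig shape.
const quoteType = {
  name: "quote",
  description: "Short quote with attribution",
  path: "src/data/quotes",
  prompts: [{ key: "title", message: "Quote source:", required: true }],
  generateFilename: (input: { title?: string; now: Date }) =>
    `${(input.title ?? "untitled").toLowerCase().replace(/\s+/g, "-")}.md`,
  generateFrontmatter: (input: { title?: string; now: Date }) =>
    `---\ntitle: "${input.title ?? "Untitled"}"\ndraft: true\n---\n`,
};
```

&lt;p&gt;Register it in the &lt;code&gt;contentTypes&lt;/code&gt; array and the CLI picks it up automatically.&lt;/p&gt;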
&lt;h2&gt;Why bother?&lt;/h2&gt;
&lt;p&gt;The point isn&amp;#39;t saving 30 seconds. It&amp;#39;s removing the friction that stops you from writing in the first place. I used to stare at blank markdown files, dreading the frontmatter setup more than the actual writing.&lt;/p&gt;
&lt;p&gt;Now I type &lt;code&gt;bun run new note&lt;/code&gt; and start writing.&lt;/p&gt;
</content:encoded></item><item><title>Unlocking Frontier Power: How to Run Amp Using CLIProxyAPI Without Spending Credits</title><link>https://rezhajul.io/posts/running-amp-without-using-credits/</link><guid isPermaLink="true">https://rezhajul.io/posts/running-amp-without-using-credits/</guid><description>The landscape of AI coding agents is shifting faster than most developers can keep up with.</description><pubDate>Thu, 25 Dec 2025 11:01:14 GMT</pubDate><content:encoded>&lt;p&gt;The landscape of AI coding agents is shifting faster than most developers can keep up with. Right now, &lt;strong&gt;&lt;a href=&quot;https://ampcode.com/&quot;&gt;Amp&lt;/a&gt;&lt;/strong&gt; is sitting at the frontier, offering raw model power that most tools are still trying to figure out. But if you have been using Amp, you know the drill: the &amp;quot;Smart&amp;quot; mode is incredible, but it eats through Amp credits.&lt;/p&gt;
&lt;p&gt;If you are already paying for ChatGPT Plus, Claude Pro, or a Google subscription, there is a better way. You can leverage &lt;strong&gt;CLIProxyAPI&lt;/strong&gt; to bridge your existing OAuth subscriptions directly into Amp. This setup effectively allows you to use Amp’s advanced agent features while bypassing the credit consumption by routing requests through your own authenticated quotas.&lt;/p&gt;
&lt;p&gt;Here is how to set up this &amp;quot;pro-tier&amp;quot; bridge to get unconstrained token usage with the best models on the market.&lt;/p&gt;
&lt;h2&gt;Why CLIProxyAPI?&lt;/h2&gt;
&lt;p&gt;At its core, &lt;strong&gt;CLIProxyAPI&lt;/strong&gt; is a proxy server designed to connect CLI models to an API setup compatible with platforms like OpenAI, Gemini, and Claude. It acts as a universal adapter, allowing you to call these models through standard API requests rather than just terminal commands.&lt;/p&gt;
&lt;p&gt;The magic happens with the specialized &lt;strong&gt;Amp CLI integration&lt;/strong&gt;. CLIProxyAPI includes dedicated routing that supports Amp&amp;#39;s unique API patterns while maintaining compatibility with standard features. It maps Amp&amp;#39;s provider-specific routes, like &lt;code&gt;/api/provider/{provider}/v1...&lt;/code&gt;, to its own internal handlers.&lt;/p&gt;
&lt;p&gt;The architecture is simple but powerful:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Request Received:&lt;/strong&gt; Amp sends a request to your local proxy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Local Check:&lt;/strong&gt; The proxy checks if you have configured an OAuth token for that model locally.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Routing:&lt;/strong&gt; If the model is authenticated (via your subscription), it uses your local OAuth tokens. If not, it falls back to Amp’s backend, which then requires Amp credits.&lt;/li&gt;
&lt;/ol&gt;
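&lt;p&gt;Conceptually, the decision is a simple lookup (a sketch for illustration; the real proxy is written in Go and does far more):&lt;/p&gt;

```typescript
// Sketch of the routing decision: models with a local OAuth token
// are served from your subscription quota; everything else falls
// back to Amp's backend and consumes credits.
const localOAuthModels = new Set(["claude-opus-4.5", "gpt-5"]);

function route(requestedModel: string): string {
  if (localOAuthModels.has(requestedModel)) {
    return "local-oauth";
  }
  return "amp-backend";
}

console.log(route("gpt-5")); // "local-oauth"
```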
&lt;h2&gt;Step 1: Installing the Infrastructure&lt;/h2&gt;
&lt;p&gt;Before you can bridge your accounts, you need to get CLIProxyAPI running. You have a few options, but the most common for dev environments are building from source or using the pre-compiled binaries.&lt;/p&gt;
&lt;h3&gt;Building from Source&lt;/h3&gt;
&lt;p&gt;You will need &lt;strong&gt;Go 1.24 or higher&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/router-for-me/CLIProxyAPI.git
cd CLIProxyAPI
go build -o cli-proxy-api ./cmd/server
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This gives you the &lt;code&gt;cli-proxy-api&lt;/code&gt; executable. If you are on macOS, you can also use Homebrew: &lt;code&gt;brew install cliproxyapi&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Step 2: Authenticating Your Subscriptions&lt;/h2&gt;
&lt;p&gt;This is the most critical step. To use Amp &amp;quot;for free&amp;quot; (using your existing subscriptions), you must authenticate the providers Amp relies on. Amp employs different models for various agent roles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Smart Mode:&lt;/strong&gt; Uses Claude models like &lt;strong&gt;Opus and Sonnet 4.5&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Oracle Subagent:&lt;/strong&gt; Uses &lt;strong&gt;GPT-5&lt;/strong&gt; (medium reasoning).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Librarian/Rush Mode:&lt;/strong&gt; Uses &lt;strong&gt;Claude Sonnet 4.5&lt;/strong&gt; or &lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Run the following commands to log in via OAuth:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;For Google/Gemini:&lt;/strong&gt; &lt;code&gt;./cli-proxy-api --login&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;For Antigravity:&lt;/strong&gt; &lt;code&gt;./cli-proxy-api -antigravity-login&lt;/code&gt;. (Yes, you read that right: the same Google account gives access to Antigravity&amp;#39;s API, which also unlocks the Claude models for me.)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;For ChatGPT Plus/Pro:&lt;/strong&gt; &lt;code&gt;./cli-proxy-api --codex-login&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;For Claude Pro/Max:&lt;/strong&gt; &lt;code&gt;./cli-proxy-api --claude-login&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you&amp;#39;re using GitHub Copilot, you need &lt;code&gt;https://github.com/router-for-me/CLIProxyAPIPlus&lt;/code&gt; instead. That fork doesn&amp;#39;t have a ready-to-use Homebrew build, so you&amp;#39;ll need to download it from the releases page or build it yourself.&lt;/p&gt;
&lt;p&gt;Once you log in through the browser window that pops up, the system saves your tokens in the &lt;code&gt;~/.cli-proxy-api&lt;/code&gt; directory. This enables the proxy to use your included usage quotas rather than Amp&amp;#39;s backend.&lt;/p&gt;
&lt;h2&gt;Step 3: Configuring the Proxy for Amp&lt;/h2&gt;
&lt;p&gt;You need a &lt;code&gt;config.yaml&lt;/code&gt; file to tell the proxy how to communicate with Amp&amp;#39;s control plane for management tasks like thread sharing and user authentication.&lt;/p&gt;
&lt;p&gt;Create or edit your &lt;code&gt;config.yaml&lt;/code&gt; with the following block:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;port: 8317
api-keys:
  - &amp;quot;your-secret-key&amp;quot; # You create this key for Amp to talk to the proxy

debug: true
quota-exceeded:
  switch-project: true
  switch-preview-model: true

ampcode:
  upstream-url: &amp;quot;https://ampcode.com&amp;quot;
  restrict-management-to-localhost: true # Extra security for your local machine
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 4: Pointing Amp to Your Local Proxy&lt;/h2&gt;
&lt;p&gt;Now that the proxy is ready, you need to tell the Amp CLI to stop talking to the cloud and start talking to &lt;code&gt;localhost&lt;/code&gt;. You can do this via environment variables or settings files.&lt;/p&gt;
&lt;h3&gt;Using Environment Variables&lt;/h3&gt;
&lt;p&gt;This is the fastest way to test the setup:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export AMP_URL=http://localhost:8317
export AMP_API_KEY=your-secret-key
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With these set, the standard &lt;code&gt;amp login&lt;/code&gt; command is no longer necessary.&lt;/p&gt;
&lt;h3&gt;Using Settings Files&lt;/h3&gt;
&lt;p&gt;For a permanent setup, edit your Amp configuration:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;URL:&lt;/strong&gt; In &lt;code&gt;~/.config/amp/settings.json&lt;/code&gt;, set &lt;code&gt;&amp;quot;amp.url&amp;quot;: &amp;quot;http://localhost:8317&amp;quot;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Key:&lt;/strong&gt; In &lt;code&gt;~/.local/share/amp/secrets.json&lt;/code&gt;, set &lt;code&gt;&amp;quot;apiKey@http://localhost:8317&amp;quot;: &amp;quot;your-secret-key&amp;quot;&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Step 5: Validating the Connection&lt;/h2&gt;
&lt;p&gt;Start your proxy:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./cli-proxy-api --config config.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, run a command in Amp:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;amp &amp;quot;Analyze the current directory and find all markdown files&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If your proxy logs show requests hitting &lt;code&gt;/api/provider/google/...&lt;/code&gt; or &lt;code&gt;/api/provider/openai/...&lt;/code&gt; and being handled locally, you are successfully using your own subscription.&lt;/p&gt;
&lt;h3&gt;The Fallback Safety Net&lt;/h3&gt;
&lt;p&gt;One of the best features of this integration is &lt;strong&gt;Model Mapping&lt;/strong&gt;. If Amp requests a model you don&amp;#39;t subscribe to, the proxy automatically substitutes the model you&amp;#39;ve mapped it to, so Amp never breaks. Here is my mapping:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;model-mappings:
  # Model requested by Amp CLI Smart mode
  - from: &amp;quot;claude-opus-4.5&amp;quot;
    to: &amp;quot;gemini-claude-opus-4-5-thinking&amp;quot; # routed to antigravity
  - from: &amp;quot;claude-opus-4-5-20251101&amp;quot;
    to: &amp;quot;gemini-claude-opus-4-5-thinking&amp;quot;

  # Model requested by Amp CLI Rush mode
  - from: &amp;quot;claude-haiku-4-5-20251001&amp;quot;
    to: &amp;quot;gemini-3-flash-preview&amp;quot;

  # Model requested by Amp CLI Oracle/Librarian/Review modes
  - from: &amp;quot;gemini-2.5-flash-lite-preview-09-2025&amp;quot;
    to: &amp;quot;gemini-3-flash-preview&amp;quot;
  - from: &amp;quot;gpt-5&amp;quot;
    to: &amp;quot;gemini-3-pro-preview&amp;quot;
  - from: &amp;quot;gpt-5.1&amp;quot;
    to: &amp;quot;gemini-3-pro-preview&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For Smart mode, I sometimes run out of Opus access, so I swap &lt;code&gt;gemini-claude-opus-4-5-thinking&lt;/code&gt; for &lt;code&gt;gemini-claude-sonnet-4-5&lt;/code&gt; or &lt;code&gt;gemini-3-pro-preview&lt;/code&gt; instead.&lt;/p&gt;
&lt;p&gt;I wrote a small script to automate switching the configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash

# Script to switch configuration and set model replacement

if [ $# -lt 1 ]; then
    echo &amp;quot;Usage: $0 &amp;lt;model-name&amp;gt; [source-config]&amp;quot;
    echo &amp;quot;&amp;quot;
    echo &amp;quot;Examples:&amp;quot;
    echo &amp;quot;  $0 gemini-claude-opus-4-5-thinking&amp;quot;
    echo &amp;quot;  $0 gemini-claude-sonnet-4-5&amp;quot;
    echo &amp;quot;  $0 gemini-3-pro-preview&amp;quot;
    echo &amp;quot;&amp;quot;
    echo &amp;quot;Optional: specify source config file (defaults to config.yaml.template)&amp;quot;
    exit 1
fi

MODEL=&amp;quot;$1&amp;quot;
SOURCE=&amp;quot;${2:-config.yaml.template}&amp;quot;
TARGET=&amp;quot;config.yaml&amp;quot;

if [ ! -f &amp;quot;$SOURCE&amp;quot; ]; then
    echo &amp;quot;Error: $SOURCE not found&amp;quot;
    exit 1
fi

# Replace all occurrences of &amp;quot;replace_this&amp;quot; with the specified model
sed &amp;quot;s/replace_this/$MODEL/g&amp;quot; &amp;quot;$SOURCE&amp;quot; &amp;gt; &amp;quot;$TARGET&amp;quot;
echo &amp;quot;Switched to model: $MODEL&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a &lt;code&gt;config.yaml.template&lt;/code&gt; file like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;port: 8317
api-keys:
  - &amp;quot;your-secret-key&amp;quot;

debug: true
quota-exceeded:
  switch-project: true
  switch-preview-model: true
ampcode:
  upstream-url: &amp;quot;https://ampcode.com&amp;quot;
  restrict-management-to-localhost: true
  model-mappings:
    # Model requested by Amp CLI Smart mode
    - from: &amp;quot;claude-opus-4.5&amp;quot;
      to: &amp;quot;replace_this&amp;quot; # routed to antigravity
    - from: &amp;quot;claude-opus-4-5-20251101&amp;quot;
      to: &amp;quot;replace_this&amp;quot;

    # Model requested by Amp CLI Rush mode
    - from: &amp;quot;claude-haiku-4-5-20251001&amp;quot;
      to: &amp;quot;gemini-3-flash-preview&amp;quot;

    # Model requested by Amp CLI Oracle/Librarian/Review modes
    - from: &amp;quot;gemini-2.5-flash-lite-preview-09-2025&amp;quot;
      to: &amp;quot;gemini-3-flash-preview&amp;quot;
    - from: &amp;quot;gpt-5&amp;quot;
      to: &amp;quot;gemini-3-pro-preview&amp;quot;
    - from: &amp;quot;gpt-5.1&amp;quot;
      to: &amp;quot;gemini-3-pro-preview&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now I just need to run the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./switch-config.sh gemini-claude-opus-4-5-thinking
./switch-config.sh gemini-claude-sonnet-4-5
./switch-config.sh gemini-3-pro-preview
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It replaces all &lt;code&gt;replace_this&lt;/code&gt; placeholders in &lt;code&gt;config.yaml.template&lt;/code&gt; with the specified model and writes the result to &lt;code&gt;config.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;CLIProxyAPI watches the config file and automatically reloads the configuration when it changes.&lt;/p&gt;
&lt;h2&gt;Advanced Usage: IDE Extensions and Security&lt;/h2&gt;
&lt;p&gt;The beauty of this setup is that it isn&amp;#39;t limited to the terminal. You can use the same proxy for Amp’s IDE extensions in &lt;strong&gt;VS Code, Cursor, or Windsurf&lt;/strong&gt;. Simply open your IDE settings, set the &lt;strong&gt;Amp URL&lt;/strong&gt; to &lt;code&gt;http://localhost:8317&lt;/code&gt;, and provide the local API key you configured.&lt;/p&gt;
&lt;h3&gt;Security Hardening&lt;/h3&gt;
&lt;p&gt;Since you are running a proxy that handles sensitive OAuth tokens, security is paramount.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Localhost Restriction:&lt;/strong&gt; Enabling &lt;code&gt;restrict-management-to-localhost: true&lt;/code&gt; prevents remote access to your management endpoints. This uses the actual TCP connection address to block &amp;quot;drive-by&amp;quot; browser attacks or header spoofing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API Key Auth:&lt;/strong&gt; Ensure &lt;code&gt;api-keys&lt;/code&gt; are configured in your &lt;code&gt;config.yaml&lt;/code&gt;. As of version 6.6.15, all management routes like &lt;code&gt;/api/auth&lt;/code&gt; and &lt;code&gt;/api/threads&lt;/code&gt; are protected by this middleware.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Frontier Power, Your Terms&lt;/h2&gt;
&lt;p&gt;By using CLIProxyAPI as a bridge, you turn Amp into a truly unconstrained tool. You get to keep Amp&amp;#39;s four core principles—unconstrained token usage, always using the best models, raw model power, and a system built to evolve—while leveraging the subscriptions you already pay for.&lt;/p&gt;
&lt;p&gt;Whether you are using the &lt;strong&gt;Oracle&lt;/strong&gt; for deep reasoning on a complex bug or spawning &lt;strong&gt;subagents&lt;/strong&gt; to refactor five files at once, this setup ensures that the only thing you are focused on is the code, not the credit balance.&lt;/p&gt;
&lt;/content:encoded&gt;</item><item><title>30 January 2025 at 14:32</title><link>https://rezhajul.io/notes/20250130432/</link><guid isPermaLink="true">https://rezhajul.io/notes/20250130432/</guid><description>30 January 2025 at 14:32</description><pubDate>Wed, 29 Jan 2025 21:32:12 GMT</pubDate><content:encoded>&lt;p&gt;I simply love how OpenAI lost its job to the AI before I lost my job to the AI.&lt;/p&gt;
</content:encoded></item><item><title>18 January 2025 at 18:36</title><link>https://rezhajul.io/notes/202501181836/</link><guid isPermaLink="true">https://rezhajul.io/notes/202501181836/</guid><description>18 January 2025 at 18:36</description><pubDate>Sat, 18 Jan 2025 11:36:12 GMT</pubDate><content:encoded>&lt;p&gt;Today has been quite productive!
I successfully integrated reblog and favourite counts from Mastodon into my blog.
It&amp;#39;s a nice improvement that makes my blog more interactive.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/uploads/2025/01/reblog-and-fav.png&quot; alt=&quot;Reblog and favorite count on my blog&quot;&gt;&lt;/p&gt;
&lt;/content:encoded&gt;</item><item><title>16 January 2025 at 15:36</title><link>https://rezhajul.io/notes/202501161536/</link><guid isPermaLink="true">https://rezhajul.io/notes/202501161536/</guid><description>16 January 2025 at 15:36</description><pubDate>Thu, 16 Jan 2025 08:36:12 GMT</pubDate><content:encoded>&lt;p&gt;I am starting a new habit of writing more this year. I don&amp;#39;t think of myself as a good writer, but that is all the more reason to write more. I will also try to write more in Bahasa Indonesia on a different blog, as I have become pretty tired of SSG.&lt;/p&gt;
</content:encoded></item><item><title>15 January 2025 at 17:31</title><link>https://rezhajul.io/notes/202501151731/</link><guid isPermaLink="true">https://rezhajul.io/notes/202501151731/</guid><description>15 January 2025 at 17:31</description><pubDate>Wed, 15 Jan 2025 10:31:12 GMT</pubDate><content:encoded>&lt;p&gt;I’d rather donate to the Mastodon nonprofit than to &amp;quot;Free Our Feeds,&amp;quot; with its multimillionaire board of directors.&lt;/p&gt;
</content:encoded></item><item><title>14 January 2025 at 18:42</title><link>https://rezhajul.io/notes/202501131848/</link><guid isPermaLink="true">https://rezhajul.io/notes/202501131848/</guid><description>14 January 2025 at 18:42</description><pubDate>Tue, 14 Jan 2025 01:42:00 GMT</pubDate><content:encoded>&lt;p&gt;Micro.blog pays $2,000 for the domain name each year. That&amp;#39;s expensive!&lt;/p&gt;
&lt;p&gt;(&lt;a href=&quot;https://www.manton.org/2025/01/10/automattic-and-blog.html&quot;&gt;via&lt;/a&gt;)&lt;/p&gt;
&lt;/content:encoded&gt;</item><item><title>13 January 2025 at 15:11</title><link>https://rezhajul.io/notes/202501130852/</link><guid isPermaLink="true">https://rezhajul.io/notes/202501130852/</guid><description>13 January 2025 at 15:11</description><pubDate>Mon, 13 Jan 2025 15:11:00 GMT</pubDate><content:encoded>&lt;p&gt;On &lt;a href=&quot;https://valatka.dev/2025/01/12/on-killer-uv-feature.html&quot;&gt;Uv has a killer feature you should know about&lt;/a&gt; (&lt;a href=&quot;https://news.ycombinator.com/item?id=42676432&quot;&gt;via&lt;/a&gt;), Lukas shared how to quickly run scripts with the desired Python version and dependencies without leaving a trace on their system.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uv run --python 3.12 --with pandas python
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I usually have multiple dependencies specified in &lt;code&gt;requirements.txt&lt;/code&gt;, so here is how to run it with a requirements file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uv run --python 3.12 --with-requirements requirements.txt python
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>12 January 2025 at 23:12</title><link>https://rezhajul.io/notes/202501122312/</link><guid isPermaLink="true">https://rezhajul.io/notes/202501122312/</guid><description>12 January 2025 at 23:12</description><pubDate>Sun, 12 Jan 2025 16:12:12 GMT</pubDate><content:encoded>&lt;p&gt;I am trying out &lt;a href=&quot;https://indieweb.org/POSSE&quot;&gt;POSSE&lt;/a&gt; from my blog to my socials.&lt;/p&gt;
</content:encoded></item><item><title>Migrating from Hugo to Zola</title><link>https://rezhajul.io/posts/migrating-to-zola/</link><guid isPermaLink="true">https://rezhajul.io/posts/migrating-to-zola/</guid><description>Yet another migration post, this time from Hugo to Zola.</description><pubDate>Sat, 11 Jan 2025 22:52:12 GMT</pubDate><content:encoded>&lt;p&gt;I have been using Hugo for my blog for a while now, and while I have been happy with it, I have been looking for a change. After some research and testing, I have decided to migrate my blog from Hugo to Zola. In this post, I will share my experience with the migration process and some of the reasons behind my decision.&lt;/p&gt;
&lt;h3&gt;Why I Decided to Migrate&lt;/h3&gt;
&lt;p&gt;My development journey started with Django. I have been using Django since version 1.6 when I started my career 11 years ago. After working as a Django developer for a long time, I have grown accustomed to its template syntax and structure. While Hugo&amp;#39;s Go templates are powerful, I find Django&amp;#39;s template system more intuitive and easier to work with. The familiarity and clarity of Django templates make development more enjoyable for me, and this was one factor that drew me to explore alternatives to Hugo.&lt;/p&gt;
&lt;p&gt;After exploring various static site generators, I settled on Zola for several reasons. First, Zola uses a template syntax similar to Django&amp;#39;s, which felt immediately familiar. While the build times are comparable to Hugo in my case, I appreciate Zola&amp;#39;s simpler and more straightforward configuration. Everything from the directory structure to the template organization feels more intuitive to me. Additionally, Zola comes with built-in syntax highlighting and search functionality out of the box, which were features I had to configure separately in Hugo.&lt;/p&gt;
&lt;h3&gt;The Migration Process&lt;/h3&gt;
&lt;p&gt;As I was already using TOML for my frontmatter, the migration process was relatively straightforward. I started by creating a new Zola site and copying over my content files from my Hugo site. I then updated the frontmatter in each file to match Zola&amp;#39;s format and made any necessary adjustments to the content itself. I also had to update my templates to work with Zola&amp;#39;s template syntax, which was a bit more involved but manageable.&lt;/p&gt;
&lt;p&gt;There are some differences between Hugo and Zola. For example, the tags and categories taxonomies are handled differently in Zola.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;# Hugo
tags = [&amp;quot;Hugo&amp;quot;, &amp;quot;Zola&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;# Zola
[taxonomies]
tags = [&amp;quot;Hugo&amp;quot;, &amp;quot;Zola&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Editing all of them would be a pain, so I wrote a small awk script to automate the process. Here is the script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-awk&quot;&gt;BEGIN { in_frontmatter=0; has_taxonomies=0; tags=&amp;quot;&amp;quot; }
/^\+\+\+/ {
    if (in_frontmatter) {
        if (!has_taxonomies &amp;amp;&amp;amp; tags != &amp;quot;&amp;quot;) {
            print &amp;quot;\n[taxonomies]&amp;quot;
            print tags
        }
    }
    in_frontmatter = !in_frontmatter
    has_taxonomies = 0
    print
    next
}
in_frontmatter &amp;amp;&amp;amp; /^tags = / {
    tags = $0
    next
}
in_frontmatter &amp;amp;&amp;amp; /^\[taxonomies\]/ {
    has_taxonomies = 1
    if (tags != &amp;quot;&amp;quot;) {
        print
        print tags
        tags = &amp;quot;&amp;quot;
        next
    }
}
{ print }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save the script as &lt;code&gt;update_frontmatter.awk&lt;/code&gt; and run it on your content files like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;find content/posts -name &amp;quot;*.md&amp;quot; -exec sh -c &amp;#39;awk -f update_frontmatter.awk &amp;quot;$1&amp;quot; &amp;gt; &amp;quot;$1.tmp&amp;quot; &amp;amp;&amp;amp; mv &amp;quot;$1.tmp&amp;quot; &amp;quot;$1&amp;quot;&amp;#39; sh {} \;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This script adds a &lt;code&gt;[taxonomies]&lt;/code&gt; section to the frontmatter of each content file if it is missing and moves the &lt;code&gt;tags&lt;/code&gt; field into it. I haven&amp;#39;t tested it for all cases, so be careful when using it.&lt;/p&gt;
&lt;h3&gt;Other Additions&lt;/h3&gt;
&lt;p&gt;I have also taken this opportunity to add Fediverse comments to my blog. I have been using Mastodon for a while now, and I wanted to integrate it into my blog. I found a blogpost by &lt;a href=&quot;https://carlschwan.eu/2020/12/29/adding-comments-to-your-static-blog-with-mastodon/&quot;&gt;Carl Schwan&lt;/a&gt; that explains how to add Mastodon comments to a static site using a simple JavaScript snippet. I followed his instructions and added the snippet to my templates, and now visitors can comment on my posts using their Fediverse accounts. You can try it out by commenting on this post!&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;The migration from Hugo to Zola was a smooth process overall, and I am happy with the results. Zola&amp;#39;s template system is more intuitive for me, and I appreciate the flexibility it offers. I am looking forward to exploring more of Zola&amp;#39;s features and customizing my blog further. If you are considering migrating from Hugo to Zola, I would recommend giving it a try and seeing if it fits your needs. See you in the next post!&lt;/p&gt;
</content:encoded></item><item><title>Several Cool Things You Can Do with F-Strings in Python</title><link>https://rezhajul.io/posts/cool-things-python-fstrings/</link><guid isPermaLink="true">https://rezhajul.io/posts/cool-things-python-fstrings/</guid><description>F-strings are a powerful and easy-to-use way to format strings in Python.</description><pubDate>Tue, 13 Feb 2024 14:52:12 GMT</pubDate><content:encoded>&lt;p&gt;F-strings are a powerful and easy-to-use way to format strings in Python. They are more readable than traditional string formatting methods and offer more flexibility. In this blog post, we will explore five useful F-string formatting tricks that you can use to make your code more readable and maintainable.&lt;/p&gt;
&lt;h1&gt;Use f-string to format number:&lt;/h1&gt;
&lt;p&gt;You can add a comma or an underscore after the colon in an f-string to use it as a thousands separator. For example, the following code formats the number 1234567 as 1,234,567 and 1_234_567:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;number = 1234567
print(f&amp;quot;The number is: {number:,}&amp;quot;)
print(f&amp;quot;The number is: {number:_}&amp;quot;)

=== Result ===
The number is: 1,234,567
The number is: 1_234_567
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Align text:&lt;/h1&gt;
&lt;p&gt;You can use &amp;gt; to right-align text, &amp;lt; to left-align it, and the caret ^ to center it. You can also specify a fill character to fill the empty spaces. For example, the following code aligns the text &amp;quot;Hello&amp;quot; in a field 20 characters wide:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;text = &amp;quot;Hello&amp;quot;
print(f&amp;quot;{text:&amp;gt;20}&amp;quot;)
print(f&amp;quot;{text:&amp;lt;20}&amp;quot;)
print(f&amp;quot;{text:^20}&amp;quot;)

=== Result ===
               Hello
Hello
       Hello
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can also specify the character that you want to use to fill the empty space.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;text = &amp;quot;Hello&amp;quot;
print(f&amp;quot;{text:+&amp;gt;20}&amp;quot;)
print(f&amp;quot;{text:-&amp;lt;20}&amp;quot;)
print(f&amp;quot;{text:=^20}&amp;quot;)

=== Result ===
+++++++++++++++Hello
Hello---------------
=======Hello========
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Format date and time:&lt;/h1&gt;
&lt;p&gt;You can use date and time format specifiers like &lt;code&gt;%d&lt;/code&gt;, &lt;code&gt;%m&lt;/code&gt;, and &lt;code&gt;%Y&lt;/code&gt; to format dates in an f-string. You can also use &lt;code&gt;%I&lt;/code&gt; and &lt;code&gt;%p&lt;/code&gt; for 12-hour time, and &lt;code&gt;%H&lt;/code&gt;, &lt;code&gt;%M&lt;/code&gt;, and &lt;code&gt;%S&lt;/code&gt; for 24-hour time. For example, the following code formats the current date and time as &lt;code&gt;%Y-%m-%d %H:%M:%S&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from datetime import datetime

now = datetime.now()
print(f&amp;quot;The current date and time is: {now:%Y-%m-%d %H:%M:%S}&amp;quot;)

=== Result ===
The current date and time is: 2024-02-13 19:24:06
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can check &lt;a href=&quot;https://strftime.org/&quot;&gt;this site&lt;/a&gt; to find other date time format specifiers.&lt;/p&gt;
&lt;h1&gt;Round numbers:&lt;/h1&gt;
&lt;p&gt;You can use a precision followed by the &lt;code&gt;f&lt;/code&gt; specifier to round a number to that many decimal places. For example, &lt;code&gt;:.2f&lt;/code&gt; rounds a number to two decimal places. The following code rounds the number 3.14159 to two decimal places:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;number = 3.14159
print(f&amp;quot;The number rounded to two decimal places is: {number:.2f}&amp;quot;)


=== Result ===
The number rounded to two decimal places is: 3.14
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Debug code using f-strings:&lt;/h1&gt;
&lt;p&gt;You can add an equal sign after an expression inside the curly braces of an f-string to print both the expression and its value. This is useful for debugging because it shows you the value of the expression at that point in the code. For example, the following code prints the value of the variable x:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;x = 10
print(f&amp;quot;The value of x is: {x=}&amp;quot;)

=== Result ===
The value of x is: x=10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I hope these five tips help you write more readable and maintainable Python code!&lt;/p&gt;
&lt;p&gt;In addition to the tips in the post, here are some other things to keep in mind when using F-strings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;F-strings can be used to format any type of data, not just strings.&lt;/li&gt;
&lt;li&gt;You can use f-strings inside other f-strings.&lt;/li&gt;
&lt;li&gt;You can use f-strings to create multi-line strings.&lt;/li&gt;
&lt;/ul&gt;
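The last two bullets are easy to demonstrate. A short sketch that nests one f-string inside another and builds a multi-line string:

```python
price = 1234.5

# Nested f-strings: the inner f-string formats the number,
# the outer one right-aligns the result in a 12-character field.
label = f"{f'${price:,.2f}':>12}"
print(repr(label))  # '   $1,234.50'

# Multi-line f-string using triple quotes.
report = f"""Item report
price:  {price:.2f}
double: {price * 2:.2f}"""
print(report)
```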
&lt;p&gt;I hope this blog post has been helpful! Please let me know if you have any questions.&lt;/p&gt;
</content:encoded></item><item><title>Upgrade Major PostgreSQL version on Docker</title><link>https://rezhajul.io/posts/upgrade-postgre-on-docker/</link><guid isPermaLink="true">https://rezhajul.io/posts/upgrade-postgre-on-docker/</guid><description>Easily upgrade PostgreSQL in docker, without losing data</description><pubDate>Tue, 03 Jan 2023 14:51:12 GMT</pubDate><content:encoded>&lt;p&gt;Upgrading a PostgreSQL database server to a newer version is a common task for database administrators and developers. In this post, we will discuss how to upgrade a PostgreSQL database server running in a Docker container.&lt;/p&gt;
&lt;p&gt;On 1 January, I decided to upgrade the PostgreSQL used by my Mastodon instance. It was running PostgreSQL 12, and I figured it could use an upgrade to PostgreSQL 15 for that &amp;quot;extra performance and features&amp;quot;. The upgrade process basically comes down to running the &lt;code&gt;pg_upgrade&lt;/code&gt; tool. &lt;code&gt;pg_upgrade&lt;/code&gt; allows data stored in PostgreSQL data files to be upgraded to a later PostgreSQL major version without the data dump/restore typically required for major version upgrades, e.g., from 9.5.8 to 9.6.4 or from 10.7 to 11.2. It is not required for minor version upgrades, e.g., from 9.6.2 to 9.6.3 or from 10.1 to 10.2.&lt;/p&gt;
&lt;p&gt;Fortunately, I found a &lt;a href=&quot;https://github.com/tianon/docker-postgres-upgrade&quot;&gt;github repository from Tianon&lt;/a&gt; that contains a ready-to-use container image to upgrade PostgreSQL. I will share my experience using them.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Shutting down the database: I am using docker-compose for the deployment, so I just run &lt;code&gt;docker-compose stop&lt;/code&gt; to stop the whole deployment of the Mastodon stack.&lt;/li&gt;
&lt;li&gt;Backing up the data: the &lt;code&gt;docker-compose.yml&lt;/code&gt; I am using already mounts a volume, so I simply copied the folder under a new name: &lt;code&gt;cp -r postgres12 old&lt;/code&gt;. Notice I am using &lt;code&gt;old&lt;/code&gt; as the folder name. This folder will be used as the source directory for &lt;code&gt;pg_upgrade&lt;/code&gt; later.&lt;/li&gt;
&lt;li&gt;Make a new directory for the new data: &lt;code&gt;mkdir new&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Pull the docker image: &lt;code&gt;docker pull tianon/postgres-upgrade:12-to-15&lt;/code&gt;. Adjust this to the PostgreSQL version you have and the target version you want; see the repo above to check whether your version combination is available.&lt;/li&gt;
&lt;li&gt;Run the docker image and wait: &lt;code&gt;docker run --rm -v /path/to/folder/mastodon/old:/var/lib/postgresql/12/data -v /path/to/folder/mastodon/new:/var/lib/postgresql/15/data tianon/postgres-upgrade:12-to-15&lt;/code&gt;. Make sure the image tag and the versioned PostgreSQL paths match your source and target versions.&lt;/li&gt;
&lt;li&gt;Rename the folder: I renamed the &lt;code&gt;new&lt;/code&gt; folder to &lt;code&gt;postgres15&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Adjust your docker-compose for the new database folder and docker image. See example below:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  db:
    restart: always
    image: postgres:12-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: [&amp;#39;CMD&amp;#39;, &amp;#39;pg_isready&amp;#39;, &amp;#39;-U&amp;#39;, &amp;#39;postgres&amp;#39;]
    volumes:
      - ./postgres12:/var/lib/postgresql/data
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  db:
   restart: always
   image: postgres:15-alpine
   shm_size: 256mb
   networks:
     - internal_network
   healthcheck:
     test: [&amp;#39;CMD&amp;#39;, &amp;#39;pg_isready&amp;#39;, &amp;#39;-U&amp;#39;, &amp;#39;postgres&amp;#39;]
   volumes:
     - ./postgres15:/var/lib/postgresql/data
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I ran &lt;code&gt;docker-compose up -d&lt;/code&gt; to check whether everything worked. You might want to check and compare the configuration in &lt;code&gt;pg_hba.conf&lt;/code&gt; and &lt;code&gt;postgresql.conf&lt;/code&gt; beforehand; I had an issue where the new &lt;code&gt;pg_hba.conf&lt;/code&gt; only listened on localhost.&lt;/p&gt;
</content:encoded></item><item><title>2023</title><link>https://rezhajul.io/posts/2023/</link><guid isPermaLink="true">https://rezhajul.io/posts/2023/</guid><description>Hello, 2023!</description><pubDate>Mon, 02 Jan 2023 14:51:12 GMT</pubDate><content:encoded>&lt;p&gt;Happy New Year!&lt;/p&gt;
&lt;p&gt;It&amp;#39;s hard to believe that another year has come and gone, and we&amp;#39;re already settling into the first few days of the new year. As we look back on the past year, it&amp;#39;s natural to reflect on the challenges and accomplishments of the past 12 months. It&amp;#39;s also a time to set new goals and resolutions for the year ahead.&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re anything like me, you might be feeling a mix of excitement and nerves as you think about all the possibilities and potential changes the new year brings. But no matter what the future holds, one thing is certain: it&amp;#39;s always a good time to hit the reset button and start fresh.&lt;/p&gt;
&lt;p&gt;So, as we embark on this new journey together, let&amp;#39;s make the most of it. Let&amp;#39;s embrace new opportunities, take on new challenges, and make the coming year our best one yet. Here&amp;#39;s to a happy, healthy, and successful new year!&lt;/p&gt;
</content:encoded></item><item><title>Hey there</title><link>https://rezhajul.io/posts/hey-there/</link><guid isPermaLink="true">https://rezhajul.io/posts/hey-there/</guid><description>Hey there... Long time no see.</description><pubDate>Sun, 04 Dec 2022 18:51:12 GMT</pubDate><content:encoded>&lt;p&gt;It&amp;#39;s been a few years since I last sat down to write a blog post, and a lot has changed in that time. The biggest change, of course, is the loss of my father this year. It&amp;#39;s been a difficult and emotional journey, but I&amp;#39;ve been trying my best to keep moving forward and find ways to honor his memory.&lt;/p&gt;
&lt;p&gt;One of the things that has helped me through this difficult time is reconnecting with old hobbies and passions. For me, that means getting back to writing. It&amp;#39;s been a great outlet for me to express my thoughts and feelings, and I&amp;#39;ve found that it brings me a sense of peace and clarity.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve also been spending more time outdoors, exploring new places and rediscovering the beauty of nature. I find that being in nature helps me to feel more connected to the world around me, and it reminds me of the endless possibilities that life has to offer.&lt;/p&gt;
&lt;p&gt;In the past few months, I&amp;#39;ve also been trying to focus on the things that bring me joy and make me happy. I&amp;#39;ve reconnected with old friends and made some new ones, and I&amp;#39;ve started taking on new challenges and trying new things. It&amp;#39;s been a great way to keep moving forward and to keep my mind and body active.&lt;/p&gt;
&lt;p&gt;Of course, there are still days when the loss of my father feels overwhelming, and I struggle to find the motivation to keep going. But I remind myself that he would want me to keep living my life to the fullest, and to make the most of every day. I know that he would be proud of me for continuing to find ways to honor his memory and to keep moving forward, no matter what life throws my way.&lt;/p&gt;
&lt;p&gt;So here I am, back to writing again after all these years. It feels good to be back, and I&amp;#39;m looking forward to sharing my thoughts and experiences with all of you. Thanks for sticking with me, and for being a part of this journey.&lt;/p&gt;
</content:encoded></item><item><title>What happened to self-hosted blogs?</title><link>https://rezhajul.io/posts/what-happened-to-self-hosted-blog/</link><guid isPermaLink="true">https://rezhajul.io/posts/what-happened-to-self-hosted-blog/</guid><description>Platforms such as Medium have made the majority of original self-hosted blogs a matter of the past. Personally, for the whole blogging community, I believe that is terrible.</description><pubDate>Sun, 19 May 2019 12:51:12 GMT</pubDate><content:encoded>&lt;p&gt;I remember a while ago when all of us ran a personal blog on the Internet. And I mean personal, not hosted on some side platform or an addition to their website. I mean personal.&lt;/p&gt;
&lt;p&gt;Companies and individuals now use platforms like Medium to host all their articles, essays and case studies. I understand the draw and can even list the positive elements:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Under the Medium brand there is already a large community.&lt;/li&gt;
&lt;li&gt;Promoting your own work and following others is easy.&lt;/li&gt;
&lt;li&gt;The platform can be set up and implemented relatively easily.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Unfortunately, this has had a severe impact on the blogging community: nobody controls their own blog anymore. Finding a new blog used to be an interesting and fun experience for me:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How did they choose to design the page?&lt;/li&gt;
&lt;li&gt;What typefaces did they choose to use?&lt;/li&gt;
&lt;li&gt;What are they using as a back-end?&lt;/li&gt;
&lt;li&gt;How does it look and feel on a mobile phone?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These personalized self-hosted blogs inspired other developers to build their own or tweak their existing blogs. In some small way, this competition between developers pushed what we could do on the web further and further.&lt;/p&gt;
&lt;p&gt;I also think self-hosted blogs inspired people to write better content, rather than churning out clickbait garbage to get &amp;quot;featured&amp;quot; or boosted on the main blogging platform. But losing that isn&amp;#39;t even the worst thing to come from this mass migration to a single platform.&lt;/p&gt;
&lt;p&gt;I&amp;#39;m not sure if it is the intention of Medium, but I personally believe that it is awful either way. The personality of most design and development blogs has been completely removed from them. All blogs look the same now.&lt;/p&gt;
&lt;p&gt;Perhaps I am just a salty developer with a narrow, pessimistic perspective on where our bloggers seem to be headed – or perhaps I simply have higher standards.&lt;/p&gt;
</content:encoded></item><item><title>Your Own Python Calendar</title><link>https://rezhajul.io/posts/your-own-python-calendar/</link><guid isPermaLink="true">https://rezhajul.io/posts/your-own-python-calendar/</guid><description>The Python calendar module defines the Calendar class. This is used for various date calculations as well as TextCalendar and HTMLCalendar classes with their local subclasses, used for rendering pre-formatted output.</description><pubDate>Mon, 23 Jul 2018 10:51:12 GMT</pubDate><content:encoded>&lt;p&gt;The Python &lt;code&gt;calendar&lt;/code&gt; module defines the Calendar class. This is used for various date calculations as well as TextCalendar and HTMLCalendar classes with their local subclasses, used for rendering pre-formatted output.&lt;/p&gt;
&lt;p&gt;Import the module:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import calendar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Print the current month:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import calendar
year = 2016
month = 1
cal = calendar.month(year, month)
print(cal)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output will look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    January 2016
Mo Tu We Th Fr Sa Su
             1  2  3
 4  5  6  7  8  9 10
11 12 13 14 15 16 17
18 19 20 21 22 23 24
25 26 27 28 29 30 31
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Set the first day of the week as Sunday:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;calendar.setfirstweekday(calendar.SUNDAY)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To print a whole year&amp;#39;s calendar:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(calendar.calendar(2016))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Output not shown since it is too large.&lt;/p&gt;
&lt;p&gt;This module provides other useful methods for working with dates, times and calendars such as &lt;code&gt;calendar.isleap&lt;/code&gt; (checks if a year is a leap year).&lt;/p&gt;
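&lt;p&gt;As a quick sketch of those extras, &lt;code&gt;calendar.isleap&lt;/code&gt; and its companion &lt;code&gt;calendar.leapdays&lt;/code&gt; work like this (the sample years are chosen just for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import calendar

print(calendar.isleap(2016))          # True
print(calendar.isleap(2018))          # False
# leapdays counts leap years in the half-open range [y1, y2)
print(calendar.leapdays(2000, 2020))  # 5
&lt;/code&gt;&lt;/pre&gt;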
</content:encoded></item><item><title>Get the Most of Floats</title><link>https://rezhajul.io/posts/get-the-most-of-floats/</link><guid isPermaLink="true">https://rezhajul.io/posts/get-the-most-of-floats/</guid><description>Floats also have several additional methods useful in various scenarios</description><pubDate>Sun, 22 Jul 2018 09:42:55 GMT</pubDate><content:encoded>&lt;p&gt;Similar to the &lt;code&gt;int&lt;/code&gt; data type, &lt;code&gt;floats&lt;/code&gt; also have several additional methods useful in various scenarios.&lt;/p&gt;
&lt;p&gt;For example, you can directly check if the float number is actually an integer with &lt;code&gt;is_integer()&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; (5.9).is_integer()
False
&amp;gt;&amp;gt;&amp;gt; (-9.0).is_integer()
True
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Integer values might be preferred over floats in some cases and you can convert a &lt;code&gt;float&lt;/code&gt; to a &lt;code&gt;tuple&lt;/code&gt; matching a fraction with integer values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; (-5.5).as_integer_ratio()
(-11, 2)
# -11 / 2 == -5.5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the binary representation of a float is not really human-friendly, and a decimal representation can be long without being exact, Python also offers an exact hexadecimal format. Such hexadecimal representations have the form:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;[sign][&amp;#39;0x&amp;#39;]int[&amp;#39;.&amp;#39; fraction][&amp;#39;p&amp;#39; exponent]
# e.g 0x1.8000000000000p+0 -&amp;gt; 1.5
# 1.5 in decimal is 1.8 in hex
# 0 - sign
# int - str of hex. digits of integer part
# fraction - same for fractional part
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To convert a float number to a hex string you can use the &lt;code&gt;hex()&lt;/code&gt; method:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; (3.14).hex()
&amp;#39;0x1.91eb851eb851fp+1&amp;#39;
&amp;gt;&amp;gt;&amp;gt; float.hex(1.5)
&amp;#39;0x1.8000000000000p+0&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The reverse can be achieved with the fromhex() class method:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; float.fromhex(&amp;#39;0x1.91eb851eb851fp+1&amp;#39;)
3.14
&amp;gt;&amp;gt;&amp;gt; float.fromhex(&amp;#39;0x1.8000000000000p+0&amp;#39;)
1.5
&lt;/code&gt;&lt;/pre&gt;
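&lt;p&gt;The useful property of the hexadecimal form is that it is lossless, so a hex round-trip always recovers the same float; a small sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;x = 0.1
# hex() is exact, so the round-trip always recovers the identical float
assert float.fromhex(x.hex()) == x
# as_integer_ratio() is exact too: 0.1 is not really 1/10
print(x.as_integer_ratio())
# (3602879701896397, 36028797018963968)
&lt;/code&gt;&lt;/pre&gt;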
</content:encoded></item><item><title>Format text paragraphs with textwrap</title><link>https://rezhajul.io/posts/format-text-paragraphs-with-textwrap/</link><guid isPermaLink="true">https://rezhajul.io/posts/format-text-paragraphs-with-textwrap/</guid><description>Python&apos;s textwrap module is useful for rearranging text, e.g. wrapping and filling lines.</description><pubDate>Sat, 21 Jul 2018 10:52:49 GMT</pubDate><content:encoded>&lt;p&gt;Python&amp;#39;s textwrap module is useful for rearranging text, e.g. wrapping and filling lines.&lt;/p&gt;
&lt;p&gt;Import the module:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import textwrap
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Wrap the text in the string &amp;quot;parallel&amp;quot;, so that all lines are a maximum of x characters long:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# When x = 2
textwrap.wrap(&amp;quot;parallel&amp;quot;, width=2)
# Output:
# [&amp;#39;pa&amp;#39;, &amp;#39;ra&amp;#39;, &amp;#39;ll&amp;#39;, &amp;#39;el&amp;#39;]

# When x = 4
textwrap.wrap(&amp;quot;parallel&amp;quot;, width=4)
# Output:
# [&amp;#39;para&amp;#39;, &amp;#39;llel&amp;#39;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;wrap&lt;/code&gt; returns a list of lines (without trailing newlines).&lt;/p&gt;
&lt;p&gt;If we would like to include trailing newlines (&lt;code&gt;\n&lt;/code&gt;) after each string of a certain width we can either use the following syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;#39;\n&amp;#39;.join(textwrap.wrap(&amp;#39;text&amp;#39;, width=2))
# Output:
# &amp;#39;te\nxt&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or we can use the fill method implemented in textwrap module:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;textwrap.fill(&amp;quot;text&amp;quot;, width=2)
# Output:
# &amp;#39;te\nxt&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Collapse and truncate a text to width:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(textwrap.shorten(&amp;quot;Hello world!&amp;quot;, width=12))
# Hello world!
print(textwrap.shorten(&amp;quot;Hello world!&amp;quot;, width=11))
# Hello [...]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The last words are dropped if the text is longer than the width argument. Other useful methods like &lt;code&gt;indent&lt;/code&gt; and &lt;code&gt;dedent&lt;/code&gt; are available in this module.&lt;/p&gt;
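&lt;p&gt;A small sketch of &lt;code&gt;indent&lt;/code&gt; and &lt;code&gt;dedent&lt;/code&gt; (the sample strings are made up for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import textwrap

messy = "    Hello\n    World"
# dedent strips whitespace that is common to all lines
print(textwrap.dedent(messy))
# Hello
# World

# indent prefixes every non-empty line
print(textwrap.indent("Hello\nWorld", "  "))
#   Hello
#   World
&lt;/code&gt;&lt;/pre&gt;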
</content:encoded></item><item><title>Unicode Character Database at Your Hand</title><link>https://rezhajul.io/posts/unicode-character-database-at-your-hand/</link><guid isPermaLink="true">https://rezhajul.io/posts/unicode-character-database-at-your-hand/</guid><description>Python&apos;s self-explanatory module called unicodedata provides the user with access to the Unicode Character Database and implicitly every character&apos;s properties.</description><pubDate>Fri, 20 Jul 2018 15:59:10 GMT</pubDate><content:encoded>&lt;p&gt;Python&amp;#39;s self-explanatory module called &lt;code&gt;unicodedata&lt;/code&gt; provides the user with access to the Unicode Character Database and implicitly every character&amp;#39;s properties.&lt;/p&gt;
&lt;p&gt;Lookup a character by name with &lt;code&gt;lookup&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; import unicodedata
&amp;gt;&amp;gt;&amp;gt; unicodedata.lookup(&amp;#39;RIGHT SQUARE BRACKET&amp;#39;)
&amp;#39;]&amp;#39;
&amp;gt;&amp;gt;&amp;gt; three_wise_monkeys = [&amp;quot;SEE-NO-EVIL MONKEY&amp;quot;,
                          &amp;quot;HEAR-NO-EVIL MONKEY&amp;quot;,
                          &amp;quot;SPEAK-NO-EVIL MONKEY&amp;quot;]
&amp;gt;&amp;gt;&amp;gt; &amp;#39;&amp;#39;.join(map(unicodedata.lookup, three_wise_monkeys))
&amp;#39;🙈🙉🙊&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Get a character&amp;#39;s name with &lt;code&gt;name&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; unicodedata.name(u&amp;#39;~&amp;#39;)
&amp;#39;TILDE&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Get the &lt;code&gt;category&lt;/code&gt; of a character:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; unicodedata.category(u&amp;#39;X&amp;#39;)
&amp;#39;Lu&amp;#39;
# L = letter, u = uppercase
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Also, using the &lt;code&gt;unicodedata&lt;/code&gt; Python module, it&amp;#39;s easy to normalize any unicode data strings (remove accents, etc):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import unicodedata

data = u&amp;#39;ïnvéntìvé&amp;#39;
normal = unicodedata.normalize(&amp;#39;NFKD&amp;#39;, data).\
    encode(&amp;#39;ASCII&amp;#39;, &amp;#39;ignore&amp;#39;)
print(normal)
# b&amp;#39;inventive&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;NFKD&lt;/code&gt; stands for Normalization Form Compatibility Decomposition: characters are decomposed by compatibility, and multiple combining characters are arranged in a canonical order.&lt;/p&gt;
&lt;p&gt;To get the version of the Unicode Database currently used:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; unicodedata.unidata_version
&amp;#39;8.0.0&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Read about more methods in the &lt;a href=&quot;https://docs.python.org/3.7/library/unicodedata.html&quot;&gt;Python documentation&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Functional Particularities of Python</title><link>https://rezhajul.io/posts/functional-particularities-of-python/</link><guid isPermaLink="true">https://rezhajul.io/posts/functional-particularities-of-python/</guid><description>Paradigms of functional programming as applied to Python</description><pubDate>Thu, 19 Jul 2018 16:10:00 GMT</pubDate><content:encoded>&lt;p&gt;This time we&amp;#39;ll explore some of the paradigms of functional programming as applied to Python, specifically.&lt;/p&gt;
&lt;p&gt;One of the most common constructs to employ in a functional-style Python program is a Generator. Generators are a special class of functions that make writing iterators [1] easier.&lt;/p&gt;
&lt;p&gt;When we call a function, a new space in memory is allocated to hold all of the variables that function is concerned with, as well as other data. When that function reaches a return statement, all of that is destroyed (or, more accurately, marked for garbage collection) and the return value is given back to the caller. Calling that function again restarts the entire process.&lt;/p&gt;
&lt;p&gt;Generators allow us to make functions which essentially keep these variables from being destroyed after returning a value, and allow the execution to be paused and resumed where the function left off. Generators basically provide resumable functions. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def generate_ints(N):
    for i in range(N):
        yield i
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a simple generator, identified by the &lt;code&gt;yield&lt;/code&gt; keyword. (Any function containing a &lt;code&gt;yield&lt;/code&gt; is a generator.) When it is called, instead of a value, a &lt;code&gt;generator&lt;/code&gt; object is returned which supports the &lt;code&gt;iterator&lt;/code&gt; protocol. Each call to &lt;code&gt;next()&lt;/code&gt; on the generator object resumes the function, runs it until the next &lt;code&gt;yield&lt;/code&gt;, and returns the yielded value, &amp;quot;pausing&amp;quot; the function at that point.&lt;/p&gt;
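&lt;p&gt;To make the pausing concrete, here is how &lt;code&gt;next()&lt;/code&gt; drives the generator above (a small runnable sketch):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def generate_ints(N):
    for i in range(N):
        yield i

gen = generate_ints(3)
print(next(gen))  # 0  (runs until the first yield)
print(next(gen))  # 1  (resumes right after the previous yield)
print(list(gen))  # [2] (iteration consumes whatever is left)
&lt;/code&gt;&lt;/pre&gt;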
&lt;p&gt;Another interesting feature of functional programming with Python is Comprehensions. Using Comprehensions is an easy way to make functional code much more legible by focusing on what is to be computed, not how.&lt;/p&gt;
&lt;p&gt;A comprehension is an expression where the same flow control keywords used in loops and conditionals are used, but put in a different order to focus on data instead of the manipulation procedure. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# without comprehension
collection = []
for element in some_list:
    if cond_1(element) and cond_2(element):
        collection.append(element)
    else:
        new = mutate(element)
        collection.append(new)

# with comprehension
collection = [e if cond_1(e) and cond_2(e) else mutate(e) for e in some_list]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can clearly see, our code instantly becomes much more legible and comprehensible.&lt;/p&gt;
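&lt;p&gt;Since &lt;code&gt;cond_1&lt;/code&gt;, &lt;code&gt;cond_2&lt;/code&gt; and &lt;code&gt;mutate&lt;/code&gt; are placeholders above, here is the same comprehension with concrete stand-ins so it can actually run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;some_list = [1, 2, 3, 4, 5, 6]
cond_1 = lambda e: e % 2 == 0   # keep even numbers ...
cond_2 = lambda e: e &gt; 2        # ... that are also greater than 2
mutate = lambda e: e * 10       # everything else is multiplied by 10

collection = [e if cond_1(e) and cond_2(e) else mutate(e) for e in some_list]
print(collection)  # [10, 20, 30, 4, 50, 6]
&lt;/code&gt;&lt;/pre&gt;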
&lt;p&gt;Finally, Python is also lucky to have an avid user base which is constantly providing new third-party libraries to extend Python&amp;#39;s usability as a functional language. Although we can&amp;#39;t cover them in detail in order to keep the post short, some highlights include &lt;code&gt;pyrsistent&lt;/code&gt;, &lt;code&gt;toolz&lt;/code&gt;, &lt;code&gt;hypothesis&lt;/code&gt; and &lt;code&gt;more_itertools&lt;/code&gt;. [2]&lt;/p&gt;
&lt;hr align=&quot;center&quot;&gt;

&lt;h4&gt;FOOTNOTES&lt;/h4&gt;
&lt;p&gt;1 ITERATORS&lt;/p&gt;
&lt;p&gt;Iterators are objects which return one value at a time from a collection of values. For example, an iterator traversing a list will return one element of the list at a time until it reaches the end of the list and raises a StopIteration exception. For more information, see the Python documentation on iterators.&lt;/p&gt;
&lt;p&gt;2 THIRD PARTY LIBRARIES&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pyrsistent&lt;/code&gt; is a collection of a number of useful persistent data structures, AKA immutable data structures. &lt;code&gt;toolz&lt;/code&gt; provides a set of utility functions for iterators, functions and dictionaries. &lt;code&gt;hypothesis&lt;/code&gt; is a library which allows simple and powerful property-based testing. &lt;code&gt;more-itertools&lt;/code&gt; is exactly what it says; the library provides additional building blocks, recipes and routines above the standard itertools.&lt;/p&gt;
</content:encoded></item><item><title>Debug Bad Commit with Binary Search</title><link>https://rezhajul.io/posts/debug-bad-commit-with-binary-search/</link><guid isPermaLink="true">https://rezhajul.io/posts/debug-bad-commit-with-binary-search/</guid><description>The git bisect tool helps to identify the commit that introduced a bug.</description><pubDate>Tue, 17 Jul 2018 18:30:48 GMT</pubDate><content:encoded>&lt;p&gt;Let’s say you just pushed out a release of your code to a production environment, you’re getting bug reports about something that wasn’t happening in your development environment, and you can’t imagine why the code is doing that. You go back to your code, and it turns out you can reproduce the issue, but you can’t figure out what is going wrong.&lt;/p&gt;
&lt;p&gt;The git bisect tool helps to identify the commit that introduced a bug.&lt;/p&gt;
&lt;p&gt;To start, you need to tell git when the code last ran without problems. Assuming you are currently in a place where the code fails:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git bisect start
$ git bisect bad # current commit has a bug
# mark a commit you know is good, e.g. a release tag
$ git bisect good v2.1 # v2.1 passes the test
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;git will then check out the commit that is halfway between the good and bad commits. You test the code and if there was no problem:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git bisect good
# test halfway between middle and bad commit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If there was a problem:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git bisect bad
# test halfway between middle and good commit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You continue doing this until git identifies the first bad commit.&lt;/p&gt;
&lt;p&gt;When you finish, you need to reset your HEAD to where it was before you started, or you’ll end up in a weird state:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git bisect reset
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Number Extensions in Javascript</title><link>https://rezhajul.io/posts/number-extensions-in-javascript/</link><guid isPermaLink="true">https://rezhajul.io/posts/number-extensions-in-javascript/</guid><description>Number benefits from several changes in ES6, providing several new methods, saving us from writing our own potentially error-prone implementations.</description><pubDate>Tue, 17 Jul 2018 10:08:06 GMT</pubDate><content:encoded>&lt;p&gt;Number benefits from several changes in ES6, providing several new methods, saving us from writing our own potentially error-prone implementations. There are quite a lot of methods so here are some of the ones that are likely to have more use:&lt;/p&gt;
&lt;h2&gt;Number.isFinite&lt;/h2&gt;
&lt;p&gt;Determines whether a value is a finite number, i.e. a number other than &lt;code&gt;Infinity&lt;/code&gt;, &lt;code&gt;-Infinity&lt;/code&gt; or &lt;code&gt;NaN&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;Number.isFinite(Infinity); //false
Number.isFinite(100); //true
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Number.isInteger&lt;/h2&gt;
&lt;p&gt;Determines if a number is an integer or not.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;Number.isInteger(1); // true
Number.isInteger(0.1); //false
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Number.isNaN&lt;/h2&gt;
&lt;p&gt;Before ES6 it was difficult to test if a value was equal to &lt;code&gt;NaN&lt;/code&gt; (Not a number). This is because &lt;code&gt;NaN&lt;/code&gt; == &lt;code&gt;NaN&lt;/code&gt; evaluates to false.&lt;/p&gt;
&lt;p&gt;Whilst a global &lt;code&gt;isNaN&lt;/code&gt; function has existed in previous versions it has the issue that it converts values which makes it hard to test if something is really &lt;code&gt;NaN&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;isNaN(&amp;quot;rezha&amp;quot;) == true; //true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;Number.isNaN&lt;/code&gt; allows you to easily test if a number really is &lt;code&gt;NaN&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;Number.isNaN(1); //false
Number.isNaN(Number.NaN); //true
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Number.EPSILON&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;Number.EPSILON&lt;/code&gt; is the difference between 1 and the smallest floating-point number greater than 1, and is intended for advanced uses such as testing equality within a tolerance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;Number.EPSILON;
//2.220446049250313e-16
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Number.isSafeInteger&lt;/h2&gt;
&lt;p&gt;To be considered safe, an integer must be exactly representable as an IEEE-754 double-precision number, and its representation must not also be the rounding of any other integer. Numbers of magnitude 2^53 and above fall outside of this range:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;Number.isSafeInteger(3); //true
var unsafe = Math.pow(2, 53);
Number.isSafeInteger(unsafe); //false
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Number.MIN_SAFE_INTEGER and Number.MAX_SAFE_INTEGER&lt;/h2&gt;
&lt;p&gt;IEEE-754 doubles can represent integers exactly only within a limited range. The bounds of this range can be retrieved using Number.MIN_SAFE_INTEGER and Number.MAX_SAFE_INTEGER:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;Number.MIN_SAFE_INTEGER; //-9007199254740991
Number.MAX_SAFE_INTEGER; //9007199254740991
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>New Blog, New Domain</title><link>https://rezhajul.io/posts/new-blog-new-domain/</link><guid isPermaLink="true">https://rezhajul.io/posts/new-blog-new-domain/</guid><description>New Blog, New Domain</description><pubDate>Mon, 16 Jul 2018 10:33:17 GMT</pubDate><content:encoded>&lt;p&gt;For the last 2 years, I think I have been guilty of abusing this domain. The domain is rezhajulio.id which is an Indonesian special TLD, yet I have been using it for writing English content (well actually nothing&amp;#39;s wrong with it). Also, there is a domain that I really want: &lt;a href=&quot;https://rezhajul.io&quot;&gt;rezhajul.io&lt;/a&gt;. It&amp;#39;s really concise, has my name on it and ends with IO (input output) and that sounds so &amp;quot;techie&amp;quot; I think.&lt;/p&gt;
&lt;p&gt;So I impulsively bought the domain almost a month ago, and just yesterday I set up a blog on it with help from &lt;a href=&quot;https://gohugo.io&quot;&gt;Hugo&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;What&amp;#39;s next?&lt;/h3&gt;
&lt;p&gt;My new blog is already hosted on &lt;a href=&quot;https://netlify.com&quot;&gt;Netlify&lt;/a&gt;, which is a far superior static-site host compared to Firebase Hosting. I will move this one to Netlify too and redirect people to rezhajul.io based on geo-IP location. I&amp;#39;ll keep my new blog simple, unlike the old one, which had more features than most static sites, making the build process too long.&lt;/p&gt;
&lt;p&gt;Welcome!&lt;/p&gt;
</content:encoded></item><item><title>CSS Grid on Production</title><link>https://rezhajul.io/posts/css-grid-on-production/</link><guid isPermaLink="true">https://rezhajul.io/posts/css-grid-on-production/</guid><description>CSS Grid which allows far greater control than we&apos;ve ever had before</description><pubDate>Tue, 30 May 2017 20:56:00 GMT</pubDate><content:encoded>&lt;p&gt;Today we have new tools/toys like CSS Grid which allows far greater control than we&amp;#39;ve ever had before, but at the same time, it allows us to let go of our content and allow it to find its own natural fit within the constraints of our design.&lt;/p&gt;
&lt;p&gt;Personally, I&amp;#39;ve been looking at CSS Grid as a way to force elements to go where I want them to which is certainly one way to look at it, but it also offers a more natural &amp;quot;fit where I can&amp;quot; approach which I&amp;#39;m beginning to explore and having a lot of fun with.&lt;/p&gt;
&lt;p&gt;While you&amp;#39;re getting started with CSS Grid you&amp;#39;re sure to be thinking that you can&amp;#39;t use this in production because it&amp;#39;s &lt;a href=&quot;http://caniuse.com/#feat=css-grid&quot;&gt;only supported in the latest browsers&lt;/a&gt;. While that&amp;#39;s true, it doesn&amp;#39;t preclude you from starting to use it. Every browser that supports CSS Grid also supports &lt;code&gt;@supports&lt;/code&gt;. This means that you can do something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;.wrapper {
  max-width: 1200px;
  margin: 0 20px;
  display: grid;
  grid-gap: 10px;
  grid-template-columns: 1fr 3fr;
  grid-auto-rows: minmax(150px, auto);
}
/*float the side bar to the left and give it a width, but also tell it the grid column to go in  if Grid is supported */
.sidebar {
  float: left;
  width: 19.1489%;
  grid-column: 1;
  grid-row: 2;
}
/*float the content to the right and give it a width, but also tell it the grid column to go in  if Grid is supported */
.content {
  float: right;
  width: 79.7872%;
  grid-column: 2;
  grid-row: 2;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will give you a site that works in all browsers that do not support Grid: the grid properties are ignored, and the floats and widths take effect. If Grid is supported, we then want to remove the widths; otherwise each element will take up that fraction of its grid track rather than of the parent.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;@supports (display: grid) {
.wrapper &amp;gt; * {
  width: auto;
  margin: 0;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Voila&lt;/em&gt;, you have a fallback layout &lt;strong&gt;AND&lt;/strong&gt; grid in one.&lt;/p&gt;
</content:encoded></item><item><title>New interesting data structures in Python 3</title><link>https://rezhajul.io/posts/new-interesting-data-structures-in-python-3/</link><guid isPermaLink="true">https://rezhajul.io/posts/new-interesting-data-structures-in-python-3/</guid><description>take a look at some data structures that Python 3 offers, but that are not available in Python 2</description><pubDate>Thu, 25 May 2017 00:35:17 GMT</pubDate><content:encoded>&lt;p&gt;Python 3&amp;#39;s uptake is dramatically on the rise these days, and I think therefore that it is a good time to take a look at some data structures that Python 3 offers, but that are not available in Python 2.&lt;/p&gt;
&lt;p&gt;We will take a look at &lt;code&gt;typing.NamedTuple&lt;/code&gt;, &lt;code&gt;types.MappingProxyType&lt;/code&gt; and &lt;code&gt;types.SimpleNamespace&lt;/code&gt;, all of which are new to Python 3.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;typing.NamedTuple&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;typing.NamedTuple&lt;/code&gt; is a supercharged version of the venerable &lt;code&gt;collections.namedtuple&lt;/code&gt; and while it was added in Python 3.5, it really came into its own in Python 3.6.&lt;/p&gt;
&lt;p&gt;In comparison to &lt;code&gt;collections.namedtuple&lt;/code&gt;, &lt;code&gt;typing.NamedTuple&lt;/code&gt; gives you (Python &amp;gt;= 3.6):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;nicer syntax&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;inheritance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;type annotations&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;default values (python &amp;gt;= 3.6.1)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the same speed as &lt;code&gt;collections.namedtuple&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See an illustrative &lt;code&gt;typing.NamedTuple&lt;/code&gt; example below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; from typing import NamedTuple

&amp;gt;&amp;gt;&amp;gt; class Student(NamedTuple):
&amp;gt;&amp;gt;&amp;gt;     name: str
&amp;gt;&amp;gt;&amp;gt;     address: str
&amp;gt;&amp;gt;&amp;gt;     age: int
&amp;gt;&amp;gt;&amp;gt;     sex: str

&amp;gt;&amp;gt;&amp;gt; tommy = Student(name=&amp;#39;Tommy Johnson&amp;#39;, address=&amp;#39;Main street&amp;#39;, age=22, sex=&amp;#39;M&amp;#39;)
&amp;gt;&amp;gt;&amp;gt; tommy
    Student(name=&amp;#39;Tommy Johnson&amp;#39;, address=&amp;#39;Main street&amp;#39;, age=22, sex=&amp;#39;M&amp;#39;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I like the class-based syntax compared to the old function-based syntax, and find this much more readable.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;Student&lt;/code&gt; class is a subclass of &lt;code&gt;tuple&lt;/code&gt;, so it can be handled like any normal &lt;code&gt;tuple&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; isinstance(tommy, tuple)
    True
&amp;gt;&amp;gt;&amp;gt; tommy[0]
    &amp;#39;Tommy Johnson&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A more advanced example, subclassing &lt;code&gt;Student&lt;/code&gt; and using default values (note: default values require Python &amp;gt;= &lt;strong&gt;3.6.1&lt;/strong&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; class MaleStudent(Student):
&amp;gt;&amp;gt;&amp;gt;     sex: str = &amp;#39;M&amp;#39;  # default value, requires Python &amp;gt;= 3.6.1

&amp;gt;&amp;gt;&amp;gt;  MaleStudent(name=&amp;#39;Tommy Johnson&amp;#39;, address=&amp;#39;Main street&amp;#39;, age=22)
     MaleStudent(name=&amp;#39;Tommy Johnson&amp;#39;, address=&amp;#39;Main street&amp;#39;, age=22, sex=&amp;#39;M&amp;#39;)  # note that sex defaults to &amp;#39;M&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In short, this modern version of namedtuples is just super-nice, and will no doubt become the standard namedtuple variation in the future.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&quot;https://docs.python.org/3/library/typing.html#typing.NamedTuple&quot;&gt;docs&lt;/a&gt; for further details.&lt;/p&gt;
&lt;h3&gt;&lt;code&gt;types.MappingProxyType&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;types.MappingProxyType&lt;/code&gt; is used as a read-only dict and was added in Python 3.3.&lt;/p&gt;
&lt;p&gt;That &lt;code&gt;types.MappingProxyType&lt;/code&gt; is read-only means that it can&amp;#39;t be directly manipulated: if users want to make changes, they have to deliberately make a copy and change that copy. This is perfect if you&amp;#39;re handing a &lt;code&gt;dict&lt;/code&gt;-like structure over to a data consumer and you want to ensure that the consumer is not unintentionally changing the original data. This is often extremely useful, as a consumer mutating a passed-in data structure causes very obscure bugs that are difficult to track down.&lt;/p&gt;
&lt;p&gt;A &lt;code&gt;types.MappingProxyType&lt;/code&gt; example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt;  from types import MappingProxyType
&amp;gt;&amp;gt;&amp;gt;  data = {&amp;#39;a&amp;#39;: 1, &amp;#39;b&amp;#39;:2}
&amp;gt;&amp;gt;&amp;gt;  read_only = MappingProxyType(data)
&amp;gt;&amp;gt;&amp;gt;  del read_only[&amp;#39;a&amp;#39;]
TypeError: &amp;#39;mappingproxy&amp;#39; object does not support item deletion
&amp;gt;&amp;gt;&amp;gt;  read_only[&amp;#39;a&amp;#39;] = 3
TypeError: &amp;#39;mappingproxy&amp;#39; object does not support item assignment
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that the example shows that the &lt;code&gt;read_only&lt;/code&gt; object cannot be directly changed.&lt;/p&gt;
&lt;p&gt;So, if you want to deliver data dicts to different functions or threads and want to ensure that a function is not changing data that is also used by another function, you can just deliver a &lt;code&gt;MappingProxyType&lt;/code&gt; object to all functions, rather than the original &lt;code&gt;dict&lt;/code&gt;, and the data dict now cannot be changed unintentionally. An example illustrates this usage of &lt;code&gt;MappingProxyType&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt;  def my_func(in_dict):
&amp;gt;&amp;gt;&amp;gt;     ...  # lots of code
&amp;gt;&amp;gt;&amp;gt;     in_dict[&amp;#39;a&amp;#39;] *= 10  # oops, a bug, this will change the sent-in dict

...
# in some function/thread:
&amp;gt;&amp;gt;&amp;gt;  my_func(data)
&amp;gt;&amp;gt;&amp;gt;  data
data = {&amp;#39;a&amp;#39;: 10, &amp;#39;b&amp;#39;:2}  # oops, note that data[&amp;#39;a&amp;#39;] now has changed as a side effect of calling my_func
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you send in a &lt;code&gt;mappingproxy&lt;/code&gt; to &lt;code&gt;my_func&lt;/code&gt; instead, however, attempts to change the dict will result in an error:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt;  my_func(MappingProxyType(data))
TypeError: &amp;#39;mappingproxy&amp;#39; object does not support item assignment
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We now see that we have to correct the code in &lt;code&gt;my_func&lt;/code&gt; to first copy &lt;code&gt;in_dict&lt;/code&gt; and then alter the copied dict to avoid this error. This feature of &lt;code&gt;mappingproxy&lt;/code&gt; is great, as it helps us avoid a whole class of difficult-to-find bugs.&lt;/p&gt;
&lt;p&gt;Note though that while &lt;code&gt;read_only&lt;/code&gt; is read-only, it is not immutable, so if you change &lt;code&gt;data&lt;/code&gt;, &lt;code&gt;read_only&lt;/code&gt; will change too:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt;  data[&amp;#39;a&amp;#39;] = 3
&amp;gt;&amp;gt;&amp;gt;  data[&amp;#39;c&amp;#39;] = 4
&amp;gt;&amp;gt;&amp;gt;  read_only  # changed!
mappingproxy({&amp;#39;a&amp;#39;: 3, &amp;#39;b&amp;#39;: 2, &amp;#39;c&amp;#39;: 4})
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We see that &lt;code&gt;read_only&lt;/code&gt; is actually a view of the underlying &lt;code&gt;dict&lt;/code&gt;, and is not an independent object. This is something to be aware of. See the &lt;a href=&quot;https://docs.python.org/3/library/types.html#types.MappingProxyType&quot;&gt;docs&lt;/a&gt; for further details.&lt;/p&gt;
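&lt;p&gt;If you need an independent snapshot rather than a live view, you have to copy the underlying data yourself; a small sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from types import MappingProxyType

data = {'a': 1, 'b': 2}
read_only = MappingProxyType(data)
snapshot = dict(read_only)  # an independent copy, not a live view

data['a'] = 99
print(read_only['a'])  # 99  (the proxy tracks the original dict)
print(snapshot['a'])   # 1   (the copy does not)
&lt;/code&gt;&lt;/pre&gt;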
&lt;h3&gt;&lt;code&gt;types.SimpleNamespace&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;types.SimpleNamespace&lt;/code&gt; is a simple class that provides attribute access to its namespace, as well as a meaningful repr. It was added in Python 3.3.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt;  from types import SimpleNamespace

&amp;gt;&amp;gt;&amp;gt;  data = SimpleNamespace(a=1, b=2)
&amp;gt;&amp;gt;&amp;gt;  data
namespace(a=1, b=2)
&amp;gt;&amp;gt;&amp;gt;  data.c = 3
&amp;gt;&amp;gt;&amp;gt;  data
namespace(a=1, b=2, c=3)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In short, &lt;code&gt;types.SimpleNamespace&lt;/code&gt; is just an ultra-simple class, allowing you to set, change and delete attributes while it also provides a nice repr output string.&lt;/p&gt;
&lt;p&gt;I sometimes use this as an easier-to-read-and-write alternative to &lt;code&gt;dict&lt;/code&gt;. More and more though, I subclass it to get the flexible instantiation and repr output for free:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt;  import random

&amp;gt;&amp;gt;&amp;gt;  class DataBag(SimpleNamespace):
...      def choice(self):
...          items = self.__dict__.items()
...          return random.choice(tuple(items))

&amp;gt;&amp;gt;&amp;gt;  data_bag = DataBag(a=1, b=2)
&amp;gt;&amp;gt;&amp;gt;  data_bag
DataBag(a=1, b=2)
&amp;gt;&amp;gt;&amp;gt;  data_bag.choice()
(&amp;#39;b&amp;#39;, 2)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This subclassing of &lt;code&gt;types.SimpleNamespace&lt;/code&gt; is not revolutionary really, but it can save on a few lines of text in some very common cases, which is nice. See the &lt;a href=&quot;https://docs.python.org/3/library/types.html#types.SimpleNamespace&quot;&gt;docs&lt;/a&gt; for details.&lt;/p&gt;
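&lt;p&gt;Since &lt;code&gt;SimpleNamespace&lt;/code&gt; stores everything in its &lt;code&gt;__dict__&lt;/code&gt;, converting between it and a plain &lt;code&gt;dict&lt;/code&gt; is a one-liner in each direction:&lt;/p&gt;

```python
from types import SimpleNamespace

d = {'a': 1, 'b': 2}
ns = SimpleNamespace(**d)  # dict -> namespace
print(ns.a)      # 1
print(vars(ns))  # {'a': 1, 'b': 2} -- namespace -> dict
```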
</content:encoded></item><item><title>Keyword arguments demystified</title><link>https://rezhajul.io/posts/keyword-argument-demistify/</link><guid isPermaLink="true">https://rezhajul.io/posts/keyword-argument-demistify/</guid><description>There’s a lot of confusion among Python programmers about what exactly &apos;keyword arguments&apos; are</description><pubDate>Mon, 22 May 2017 14:00:00 GMT</pubDate><content:encoded>&lt;p&gt;There’s a lot of confusion among Python programmers about what exactly “keyword arguments” are. Let’s clear some of it up.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you somehow are writing for a Python 3 only codebase, I highly recommend making all your keyword arguments keyword only, especially keyword arguments that represent “options”.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;There are many problems with this sentence. The first is that this is mixing up “arguments” (i.e. things at the call site) and “parameters” (i.e. things you declare when defining a function). So:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def foo(a, b):  # &amp;lt;- a and b are &amp;quot;parameters&amp;quot; or &amp;quot;formal arguments&amp;quot;
    pass

foo(1, 2)  # &amp;lt;- 1 and 2 are arguments to foo, that match a and b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This confusion is common among programmers. I also use the word “argument” when I mean “parameter”, because normally in conversation we can tell the difference in context. Even the documentation in the Python standard library uses these as synonyms.&lt;/p&gt;
&lt;p&gt;The code above is the basic case with positional arguments. But we were talking about keyword arguments so let’s talk about those too:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def bar(a,    # &amp;lt;- this is a normal python parameter
        b=1,  # &amp;lt;- this is a parameter with a default value
        *,    # &amp;lt;- all parameters after this are keyword only
        c=2,  # &amp;lt;- keyword only parameter with default value
        d):   # &amp;lt;- keyword only parameter without default value
    pass
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So far so good. Now, let’s think about the statement we started with:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I highly recommend making all your keyword arguments keyword only&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That implies there are keyword arguments that are not keyword only arguments. That’s sort of correct, but also very wrong. Let’s look at some calls to &lt;code&gt;bar&lt;/code&gt; (remember that &lt;code&gt;d&lt;/code&gt; has no default value, so it must always be supplied, and only by keyword):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;bar(1, d=4)         # one positional and one keyword argument
bar(1, 2, d=4)      # two positional arguments and one keyword argument
bar(a=1, d=4)       # two keyword arguments
bar(a=1, b=2, d=4)  # three keyword arguments
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The trick here is to realize that a “keyword argument” is a concept of the call site, not the declaration. But a “keyword only argument” is a concept of the declaration, not the call site. Super confusing!&lt;/p&gt;
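&lt;p&gt;To see the keyword-only rule enforced at the call site, try passing &lt;code&gt;c&lt;/code&gt; positionally to &lt;code&gt;bar&lt;/code&gt;:&lt;/p&gt;

```python
def bar(a, b=1, *, c=2, d):
    pass

bar(1, 2, d=4)  # fine: d is passed by keyword

try:
    bar(1, 2, 3, d=4)  # tries to pass c positionally
except TypeError as e:
    print(e)  # complains about too many positional arguments
```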
&lt;p&gt;There are also parameters that are positional only. The built-in function &lt;code&gt;sum&lt;/code&gt; is one of them: according to the documentation its signature is &lt;code&gt;sum(iterable[, start])&lt;/code&gt;. But there’s a catch!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; sum(iterable=[1, 2])
Traceback (most recent call last):
  File &amp;quot;&amp;lt;stdin&amp;gt;&amp;quot;, line 1, in &amp;lt;module&amp;gt;
TypeError: sum() takes no keyword arguments
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the &lt;code&gt;start&lt;/code&gt; parameter can’t be used as a keyword argument either, even though it’s optional!&lt;/p&gt;
&lt;h3&gt;Recap&lt;/h3&gt;
&lt;p&gt;(I’m using “argument” here even though “parameter” or “formal argument” would be more correct, but the Python standard library uses these all as synonyms so I will too, so my wording matches the documentation.)&lt;/p&gt;
&lt;p&gt;Python functions can have :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Arguments that can be used both as positional and keyword arguments (this is the most common case)&lt;/li&gt;
&lt;li&gt;Arguments that can be used both as positional and keyword arguments with default values (or just “arguments with default values”)&lt;/li&gt;
&lt;li&gt;Positional only arguments (like the first argument to &lt;code&gt;sum&lt;/code&gt;; this is uncommon and can only be achieved by functions implemented in C)&lt;/li&gt;
&lt;li&gt;Positional only arguments with default values (like above, only for C)&lt;/li&gt;
&lt;li&gt;Optional positional only arguments (the 2nd argument to &lt;code&gt;sum&lt;/code&gt;; like above, only for C)&lt;/li&gt;
&lt;li&gt;Keyword only arguments&lt;/li&gt;
&lt;li&gt;Keyword only arguments with default values&lt;/li&gt;
&lt;li&gt;Arbitrary positional arguments ( *args )&lt;/li&gt;
&lt;li&gt;Arbitrary keyword arguments ( **kwargs )&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When calling Python functions you can have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Positional arguments&lt;/li&gt;
&lt;li&gt;Keyword arguments&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It’s very simple at the call site, but a lot more complex at the function definition, and the rules for mapping call-site arguments onto the declared parameters add yet more complexity.&lt;/p&gt;
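&lt;p&gt;The standard library’s &lt;code&gt;inspect&lt;/code&gt; module can show you which kind each parameter of &lt;code&gt;bar&lt;/code&gt; is:&lt;/p&gt;

```python
import inspect

def bar(a, b=1, *, c=2, d):
    pass

# Each parameter object carries its "kind" as an enum member:
for p in inspect.signature(bar).parameters.values():
    print(p.name, p.kind.name)
# a POSITIONAL_OR_KEYWORD
# b POSITIONAL_OR_KEYWORD
# c KEYWORD_ONLY
# d KEYWORD_ONLY
```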
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;Python appears simple because most of these rules and distinctions are so well thought out that many programmers can go years in a professional career believing default arguments and keyword arguments are the same thing, and never get bitten by this incorrect belief.&lt;/p&gt;
</content:encoded></item><item><title>Looping techniques in Python</title><link>https://rezhajul.io/posts/looping-techniques-in-python/</link><guid isPermaLink="true">https://rezhajul.io/posts/looping-techniques-in-python/</guid><description>Python has multiple techniques for looping over data structures</description><pubDate>Thu, 04 May 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Python has multiple techniques for looping over data structures.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dictionary&lt;/strong&gt; looping with both key and value can be done using the &lt;code&gt;items()&lt;/code&gt; method:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;my_dict = {&amp;#39;first&amp;#39;: &amp;#39;a&amp;#39;, &amp;#39;second&amp;#39;: &amp;#39;b&amp;#39;}
for k, v in my_dict.items():
    print(k, v)
# first a
# second b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;enumerate()&lt;/code&gt; function allows looping with both &lt;code&gt;index&lt;/code&gt; and &lt;code&gt;value&lt;/code&gt; through any &lt;strong&gt;sequence&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;my_list = [&amp;#39;a&amp;#39;, &amp;#39;b&amp;#39;]
for i, v in enumerate(my_list):
    print(i, v)
# 0 a
# 1 b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;zip()&lt;/code&gt; function can be used to pair two or more &lt;strong&gt;sequences&lt;/strong&gt; in order to loop over both of them in &lt;em&gt;parallel&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;first_list = [&amp;#39;a&amp;#39;, &amp;#39;b&amp;#39;]
second_list = [&amp;#39;one&amp;#39;, &amp;#39;two&amp;#39;]
for f, s in zip(first_list, second_list):
    print(f, s)
# a one
# b two
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To loop in a sorted order, use the &lt;code&gt;sorted()&lt;/code&gt; function:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;my_list = [&amp;#39;b&amp;#39;, &amp;#39;c&amp;#39;, &amp;#39;a&amp;#39;]
for f in sorted(my_list):
    print(f)
# a
# b
# c
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To loop in reverse, pass the sorted list to the &lt;code&gt;reversed()&lt;/code&gt; function:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;for f in reversed(sorted(my_list)):
    print(f)
# c
# b
# a
&lt;/code&gt;&lt;/pre&gt;
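&lt;p&gt;The same result can be had in one call, since &lt;code&gt;sorted()&lt;/code&gt; also accepts a &lt;code&gt;reverse&lt;/code&gt; flag:&lt;/p&gt;

```python
my_list = ['b', 'c', 'a']

# reverse=True avoids wrapping sorted() in reversed():
for f in sorted(my_list, reverse=True):
    print(f)
# c
# b
# a
```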
</content:encoded></item><item><title>Enhance your tuples</title><link>https://rezhajul.io/posts/enhance-your-tuples/</link><guid isPermaLink="true">https://rezhajul.io/posts/enhance-your-tuples/</guid><description>Standard Python `tuple`s are lightweight sequences of immutable objects, yet their implementation may prove inconvenient in some scenarios</description><pubDate>Wed, 03 May 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Standard Python &lt;code&gt;tuple&lt;/code&gt;s are lightweight sequences of immutable objects, yet their implementation may prove inconvenient in some scenarios.&lt;/p&gt;
&lt;p&gt;Instead, the &lt;code&gt;collections&lt;/code&gt; module provides an enhanced version of a tuple, &lt;code&gt;namedtuple&lt;/code&gt;, that makes member access more natural (rather than using integer indexes).&lt;/p&gt;
&lt;p&gt;Import &lt;code&gt;namedtuple&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from collections import namedtuple
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a &lt;code&gt;namedtuple&lt;/code&gt; object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;Point3D = namedtuple(&amp;#39;Point3D&amp;#39;, &amp;#39;x y z&amp;#39;)
A = Point3D(x=3, y=5, z=6)
print(A)
# Point3D(x=3, y=5, z=6)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Access a specific member:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(A.x)
# 3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because namedtuples are backwards compatible with normal tuples, member access can also be done with indexes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(A[0])
# 3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To convert a namedtuple to a dict (actually an &lt;code&gt;OrderedDict&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(A._asdict())
#OrderedDict([(&amp;#39;x&amp;#39;, 3), (&amp;#39;y&amp;#39;, 5), (&amp;#39;z&amp;#39;, 6)])
&lt;/code&gt;&lt;/pre&gt;
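&lt;p&gt;Because namedtuples are immutable, &amp;quot;changing&amp;quot; a field means building a new tuple; the &lt;code&gt;_replace()&lt;/code&gt; method does exactly that (the &lt;code&gt;Point3D&lt;/code&gt; name here is illustrative, since type names must be valid identifiers):&lt;/p&gt;

```python
from collections import namedtuple

Point3D = namedtuple('Point3D', 'x y z')
A = Point3D(x=3, y=5, z=6)

# _replace returns a modified copy; the original is untouched:
B = A._replace(z=0)
print(B)    # Point3D(x=3, y=5, z=0)
print(A.z)  # 6
```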
</content:encoded></item><item><title>Get more with collections!</title><link>https://rezhajul.io/posts/get-more-with-collections/</link><guid isPermaLink="true">https://rezhajul.io/posts/get-more-with-collections/</guid><description>In addition to Python&apos;s built-in data structures (such as `tuple`s, `dict`s, and `list`s), a library module called `collections` provides data structures with additional features, some of which are specializations of the built-in ones</description><pubDate>Tue, 02 May 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;In addition to Python&amp;#39;s built-in data structures (such as &lt;code&gt;tuple&lt;/code&gt;s, &lt;code&gt;dict&lt;/code&gt;s, and &lt;code&gt;list&lt;/code&gt;s), a library module called &lt;code&gt;collections&lt;/code&gt; provides data structures with additional features, some of which are specializations of the built-in ones.&lt;/p&gt;
&lt;p&gt;Import the module:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import collections
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Specialized container datatypes are usually &lt;code&gt;dict&lt;/code&gt; subclasses or wrappers around other classes like lists, tuples, etc.&lt;/p&gt;
&lt;p&gt;Notable implementations are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the &lt;code&gt;Counter&lt;/code&gt; class used for counting hashable objects.&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;defaultdict&lt;/code&gt; class, a &lt;code&gt;dict&lt;/code&gt; subclass that calls a factory function to supply values for missing keys.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;namedtuple&lt;/code&gt; class used for defining a meaning for every position in a tuple, often useful with databases or CSV files.&lt;/li&gt;
&lt;/ul&gt;
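&lt;p&gt;A quick taste of the first two (a small sketch):&lt;/p&gt;

```python
from collections import Counter, defaultdict

# Counter tallies hashable objects:
counts = Counter('abracadabra')
print(counts.most_common(1))  # [('a', 5)]

# defaultdict creates values for missing keys via a factory:
groups = defaultdict(list)
for word in ['apple', 'avocado', 'banana']:
    groups[word[0]].append(word)
print(groups['a'])  # ['apple', 'avocado']
```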
</content:encoded></item><item><title>There is more to copying</title><link>https://rezhajul.io/posts/there-is-more-to-copying/</link><guid isPermaLink="true">https://rezhajul.io/posts/there-is-more-to-copying/</guid><description>An assignment only creates a binding (an association) between a name and a target (object of some type). A copy is sometimes necessary so you can change the value of one object without changing the other (when two names are pointing to the same object)</description><pubDate>Mon, 01 May 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;An &lt;strong&gt;assignment&lt;/strong&gt; only creates a &amp;quot;binding&amp;quot; (an association) between a name and a &amp;quot;target&amp;quot; (object of some type). A &lt;strong&gt;copy&lt;/strong&gt; is sometimes necessary so you can change the value of one object without changing the other (when two names are pointing to the same object).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Assignment: bind the name y to
# the list [1, 2].
y = [1, 2]
# Create another binding -
# bind the name x to the same
# object that y is currently bound to.
x = y
# x[0] is changed too, when y[0] is.
y[0] = 99
print(x[0])
# 99
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The copy module has methods to support both shallow and deep copying of objects.&lt;/p&gt;
&lt;p&gt;To create a &lt;strong&gt;shallow&lt;/strong&gt; copy (construct a new object, but references to the objects found in the original are inserted):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from copy import copy

y = [1, 2]
x = copy(y)
# note that x = y.copy() also works
y[0] = 99
print(x[0])
# 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create a &lt;strong&gt;deep&lt;/strong&gt; copy (references are replaced by recursively created copies of the nested objects):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from copy import deepcopy
#...
x = deepcopy(y)
&lt;/code&gt;&lt;/pre&gt;
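&lt;p&gt;The difference between the two only shows up with nested objects; a small sketch:&lt;/p&gt;

```python
from copy import copy, deepcopy

y = [[1, 2], [3, 4]]
shallow = copy(y)    # new outer list, same inner lists
deep = deepcopy(y)   # new outer list, new inner lists

y[0][0] = 99
print(shallow[0][0])  # 99 -- the inner list is shared
print(deep[0][0])     # 1  -- fully independent
```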
</content:encoded></item><item><title>Implementing weak references in Python</title><link>https://rezhajul.io/posts/implementing-weak-references-in-python/</link><guid isPermaLink="true">https://rezhajul.io/posts/implementing-weak-references-in-python/</guid><description>Normal Python references to objects increment the object&apos;s reference count thus preventing it from being garbage collected</description><pubDate>Sun, 30 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Normal Python references to objects increment the object&amp;#39;s reference count thus preventing it from being &lt;strong&gt;garbage collected&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To create &lt;strong&gt;weak references&lt;/strong&gt;, the &lt;code&gt;weakref&lt;/code&gt; module can be used:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import weakref

class Rezha(object): pass
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create a weak reference, the &lt;code&gt;ref&lt;/code&gt; class is used:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# object instance
rezha = Rezha()
# weak reference to our object
r = weakref.ref(rezha)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, you can call the reference object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(r)
# &amp;lt;weakref at 0x01414E40; to &amp;#39;Rezha&amp;#39;...&amp;gt;
print(r())
# &amp;lt;__main__.Rezha object at 0x0133D270&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the referent no longer exists, calling the weak reference returns &lt;code&gt;None&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;del rezha
print(r())
# None
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To check whether the referent still exists, call the weak reference and test the result (the reference object itself is never &lt;code&gt;None&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;obj = r()
if obj is not None:
    print(&amp;#39;referent still exists!&amp;#39;)
&lt;/code&gt;&lt;/pre&gt;
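&lt;p&gt;&lt;code&gt;weakref.ref&lt;/code&gt; also accepts an optional callback that runs when the referent is collected (a small sketch; on CPython the collection happens immediately after &lt;code&gt;del&lt;/code&gt;):&lt;/p&gt;

```python
import weakref

class Rezha:
    pass

def on_collect(ref):
    # Receives the (now dead) weak reference itself.
    print('referent was garbage collected')

rezha = Rezha()
r = weakref.ref(rezha, on_collect)

del rezha   # on CPython the callback fires here
print(r())  # None
```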
&lt;p&gt;Weak references are often used to implement caches of large objects, and to avoid keeping objects in reference cycles alive.&lt;/p&gt;
&lt;p&gt;Imagine the case where object X references object Y and Y references X. Without a cycle-detecting garbage collector, the two objects would never be garbage collected. However, if one of the references is &lt;strong&gt;weak&lt;/strong&gt;, they will be properly garbage collected.&lt;/p&gt;
</content:encoded></item><item><title>Translating Scanner tokens into primitive types</title><link>https://rezhajul.io/posts/translating-scanner-tokens-into-primitive-types/</link><guid isPermaLink="true">https://rezhajul.io/posts/translating-scanner-tokens-into-primitive-types/</guid><description>Translating Scanner tokens into primitive types</description><pubDate>Sat, 29 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;code&gt;Scanner&lt;/code&gt; class can be used to break an input into tokens separated by delimiters. &lt;code&gt;Scanner&lt;/code&gt; also has methods to translate each token into one of Java&amp;#39;s primitive types.&lt;/p&gt;
&lt;p&gt;Delimiters are string &lt;em&gt;patterns&lt;/em&gt; and are set in the following way:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;Scanner s = new Scanner(input);
s.useDelimiter(pattern);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we need to read a file containing a set of integers, and calculate the total, we can use &lt;code&gt;Scanner&lt;/code&gt;&amp;#39;s &lt;code&gt;nextInt&lt;/code&gt; method:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;int sum = 0;
//create a new instance of Scanner
Scanner s = new Scanner(
  new BufferedReader(
    new FileReader(&amp;quot;in.txt&amp;quot;)));
/*tokenize the input file and
convert to integers*/
while (s.hasNext()) {
  if (s.hasNextInt()) {
    sum += s.nextInt();
  } else {
    s.next();
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the example above, we iterate through the file, tokenizing each value and converting it to an integer before adding it to &lt;code&gt;sum&lt;/code&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Listing a file system&apos;s root directories</title><link>https://rezhajul.io/posts/listing-a-file-systems-root-directories/</link><guid isPermaLink="true">https://rezhajul.io/posts/listing-a-file-systems-root-directories/</guid><description>Listing a file system&apos;s root directories</description><pubDate>Fri, 28 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Using the static &lt;code&gt;FileSystems&lt;/code&gt; class, you can get the default &lt;code&gt;FileSystem&lt;/code&gt; (note the missing &lt;code&gt;s&lt;/code&gt;) through the &lt;code&gt;getDefault()&lt;/code&gt; method.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;getRootDirectories()&lt;/code&gt; method of &lt;code&gt;FileSystem&lt;/code&gt; can be used to acquire a list of root directories on a file system.&lt;/p&gt;
&lt;p&gt;An object of type &lt;code&gt;Iterable&lt;/code&gt; is returned, so the directories can be iterated over in the following way:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;FileSystem sys = FileSystems.getDefault();
Iterable&amp;lt;Path&amp;gt; d = sys.getRootDirectories();
for (Path name: d) {
    System.out.println(name);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is available from Java 1.7.&lt;/p&gt;
</content:encoded></item><item><title>The Console class</title><link>https://rezhajul.io/posts/the-console-class/</link><guid isPermaLink="true">https://rezhajul.io/posts/the-console-class/</guid><description>The Console class</description><pubDate>Thu, 27 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;code&gt;java.io.Console&lt;/code&gt; class provides methods to access the character-based console device, if any, associated with the current Java Virtual Machine. This class is attached to the &lt;code&gt;System&lt;/code&gt; console internally.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;Console&lt;/code&gt; class provides means to read text and passwords from the console.&lt;/p&gt;
&lt;p&gt;To read a line as a &lt;code&gt;String&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;Console console = System.console();
String myString = console.readLine();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To read a password as an array of chars:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;Console console = System.console();
char[] pw = console.readPassword();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you read passwords using the &lt;code&gt;Console&lt;/code&gt; class, the input will not be echoed to the screen.&lt;/p&gt;
&lt;p&gt;Keep in mind that this class does not provide strong security guarantees; it is mostly useful during development.&lt;/p&gt;
</content:encoded></item><item><title>Next, Function or Method ?</title><link>https://rezhajul.io/posts/next-function-or-method/</link><guid isPermaLink="true">https://rezhajul.io/posts/next-function-or-method/</guid><description>Next, Function or Method ?</description><pubDate>Tue, 25 Apr 2017 17:56:15 GMT</pubDate><content:encoded>&lt;p&gt;While in Python 2 it was possible to use both the function &lt;code&gt;next()&lt;/code&gt; and the &lt;code&gt;.next()&lt;/code&gt; method to iterate over the resulting values of a generator, the latter has been removed with the introduction of Python 3.&lt;/p&gt;
&lt;p&gt;Consider the sample generator:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def sample_generator():
    yield &amp;quot;a&amp;quot;
    yield &amp;quot;b&amp;quot;
    yield &amp;quot;c&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In Python 2:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;a = sample_generator()
print(next(a)) # prints &amp;#39;a&amp;#39;
print(a.next()) # prints &amp;#39;b&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But in Python 3:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(next(a)) # prints &amp;#39;a&amp;#39;
print(a.next()) # AttributeError
&lt;/code&gt;&lt;/pre&gt;
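&lt;p&gt;In Python 3 the method was renamed to &lt;code&gt;__next__()&lt;/code&gt;, which is what the built-in &lt;code&gt;next()&lt;/code&gt; calls under the hood:&lt;/p&gt;

```python
def sample_generator():
    yield 'a'
    yield 'b'
    yield 'c'

a = sample_generator()
print(next(a))       # a
print(a.__next__())  # b -- the dunder method behind next() in Python 3
```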
</content:encoded></item><item><title>Generator Expressions</title><link>https://rezhajul.io/posts/generator-expressions/</link><guid isPermaLink="true">https://rezhajul.io/posts/generator-expressions/</guid><description>Generator Expressions</description><pubDate>Mon, 24 Apr 2017 00:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Generator expressions are a high-performance, memory-efficient generator counterpart of list comprehensions.
Imagine we want to sum up all even numbers below 100.&lt;/p&gt;
&lt;p&gt;Using list comprehension:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;even_sum = sum([x for x in range(1, 100)
               if x % 2 == 0])
print(even_sum)
#2450
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is inefficient for a large range because it first builds the entire list in memory, then iterates over it to compute the sum.
The same result can be achieved with a generator expression:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;even_sum = sum(x for x in range(1, 100)
               if x % 2 == 0)
print(even_sum)
#2450
&lt;/code&gt;&lt;/pre&gt;
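&lt;p&gt;The memory difference is easy to observe with &lt;code&gt;sys.getsizeof&lt;/code&gt; (exact numbers vary by interpreter and version):&lt;/p&gt;

```python
import sys

eager = [x * x for x in range(100_000)]  # materializes every element
lazy = (x * x for x in range(100_000))   # produces elements on demand

print(sys.getsizeof(eager))  # hundreds of kilobytes
print(sys.getsizeof(lazy))   # a couple hundred bytes, regardless of the range
```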
&lt;p&gt;A generator expression must be enclosed in parentheses &lt;code&gt;()&lt;/code&gt;. For example, a generator for squares of numbers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;squares = (x * x for x in range(1, 10))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This generator can now be converted to a list with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(list(squares))
# [1, 4, 9, 16, 25, 36, 49, 64, 81]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or, iterate over it with a for loop:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;for item in squares: print(item)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This&amp;#39;ll print nothing, since a generator can only be iterated over once. To access values from a generator more than once, either save the values in a list, or define and then run the generator again.&lt;/p&gt;
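&lt;p&gt;One way to &amp;quot;run it again&amp;quot; is to wrap the expression in a function that builds a fresh generator on each call (a small sketch):&lt;/p&gt;

```python
def squares():
    # A fresh generator expression is created on every call.
    return (x * x for x in range(1, 4))

g = squares()
print(list(g))  # [1, 4, 9]
print(list(g))  # [] -- already exhausted

print(list(squares()))  # [1, 4, 9] -- a new generator starts over
```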
</content:encoded></item><item><title>Yield Keyword</title><link>https://rezhajul.io/posts/yield-keyword/</link><guid isPermaLink="true">https://rezhajul.io/posts/yield-keyword/</guid><description>Yield Keyword</description><pubDate>Sun, 23 Apr 2017 00:33:17 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;code&gt;yield&lt;/code&gt; keyword is fundamental to the creation of &lt;strong&gt;generators&lt;/strong&gt;.
Consider the following generator function:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def createGenerator():
    print(&amp;#39;Initial call&amp;#39;)
    yield &amp;#39;1&amp;#39;
    print(&amp;#39;Second call&amp;#39;)
    yield &amp;#39;2&amp;#39;

a = createGenerator()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Calling the &lt;code&gt;createGenerator()&lt;/code&gt; function will create a generator object stored as &lt;code&gt;a&lt;/code&gt;. Note that the code inside the generator function will not be run yet.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(next(a))
# Initial call
# 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first time the generator object is iterated over (in a loop or with &lt;code&gt;next()&lt;/code&gt;), the function code will be run from the start until the first &lt;code&gt;yield&lt;/code&gt;. The value in the &lt;code&gt;yield&lt;/code&gt; statement is returned and the current position in the code is saved internally.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(next(a))
# Second call
# 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The second &lt;code&gt;next&lt;/code&gt; call will resume the code from just after the previous &lt;code&gt;yield&lt;/code&gt; and will continue running it until another &lt;code&gt;yield&lt;/code&gt; is found where it returns the desired value.&lt;/p&gt;
&lt;p&gt;When there are no more &lt;code&gt;yield&lt;/code&gt; keywords, the generator object is considered &lt;strong&gt;empty&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(next(a)) # StopIteration error
&lt;/code&gt;&lt;/pre&gt;
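&lt;p&gt;Note that &lt;code&gt;for&lt;/code&gt; loops handle &lt;code&gt;StopIteration&lt;/code&gt; for you, and &lt;code&gt;next()&lt;/code&gt; accepts a default value instead of raising:&lt;/p&gt;

```python
def create_generator():
    yield '1'
    yield '2'

# A for loop (or a comprehension) consumes the generator and
# silently stops when StopIteration is raised:
collected = [value for value in create_generator()]
print(collected)  # ['1', '2']

# next() takes an optional default that is returned when exhausted:
g = create_generator()
next(g)
next(g)
print(next(g, 'done'))  # done
```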
</content:encoded></item><item><title>How to Build Nginx with Google Pagespeed Support</title><link>https://rezhajul.io/posts/how-to-build-nginx-with-google-pagespeed-support/</link><guid isPermaLink="true">https://rezhajul.io/posts/how-to-build-nginx-with-google-pagespeed-support/</guid><description>How to Build Nginx with Google Pagespeed Support</description><pubDate>Sat, 22 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Nginx (engine-x) is an open source and high-performance HTTP server, reverse proxy and IMAP/POP3 proxy server. The outstanding features of Nginx are stability, a rich feature set, simple configuration and low memory consumption. This tutorial shows how to build a Nginx .deb package for Ubuntu 16.04 from source that has Google PageSpeed module compiled in.&lt;/p&gt;
&lt;p&gt;PageSpeed is a web server module developed by Google to speed up website response times, optimize the returned HTML, and reduce page load time. ngx_pagespeed features include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Image optimization: stripping meta-data, dynamic resizing, recompression.&lt;/li&gt;
&lt;li&gt;CSS &amp;amp; JavaScript minification, concatenation, inlining, and outlining.&lt;/li&gt;
&lt;li&gt;Small resource inlining.&lt;/li&gt;
&lt;li&gt;Deferring image and JavaScript loading.&lt;/li&gt;
&lt;li&gt;HTML rewriting.&lt;/li&gt;
&lt;li&gt;Cache lifetime extension.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See more at &lt;a href=&quot;https://developers.google.com/speed/pagespeed/module/&quot;&gt;https://developers.google.com/speed/pagespeed/module/&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Install the build dependencies&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get install dpkg-dev build-essential zlib1g-dev libpcre3 libpcre3-dev unzip
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Installing nginx with ngx_pagespeed&lt;/h3&gt;
&lt;h4&gt;Step 1 - Add the nginx repository&lt;/h4&gt;
&lt;p&gt;Create a new repository file /etc/apt/sources.list.d/nginx.list with your favourite editor.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nano /etc/apt/sources.list.d/nginx.list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add the following lines, then save the file and exit the editor:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;deb http://nginx.org/packages/ubuntu/ xenial nginx
deb-src http://nginx.org/packages/ubuntu/ xenial nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add the key and update the repository:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ABF5BD827BD9BF62
sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Step 2 - Download nginx 1.12 from ubuntu repository&lt;/h4&gt;
&lt;p&gt;Create a new directory for the nginx source files and download the nginx sources with apt:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd ~
mkdir -p ~/compile/nginx_source/
cd ~/compile/nginx_source/
apt-get source nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sometimes, there is an error: &amp;#39;packages cannot be authenticated&amp;#39;.
You can solve it by running the commands below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;rm -rf /var/lib/apt/lists/
apt-get update
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, install all dependencies to build the nginx package.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;apt-get build-dep nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Step 3 - Download Pagespeed&lt;/h4&gt;
&lt;p&gt;Create a new directory for PageSpeed and download the PageSpeed source.
In this tutorial, we will use PageSpeed 1.11.33.4 (the latest stable release):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mkdir -p ~/compile/ngx_pagespeed/
cd ~/compile/ngx_pagespeed/
export ngx_version=1.11.33.4
wget https://github.com/pagespeed/ngx_pagespeed/archive/release-${ngx_version}.zip
unzip release-${ngx_version}.zip

cd ngx_pagespeed-release-${ngx_version}/
wget https://dl.google.com/dl/page-speed/psol/${ngx_version}.tar.gz
tar -xf ${ngx_version}.tar.gz
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Step 4 - Configure nginx to build with Pagespeed&lt;/h4&gt;
&lt;p&gt;Go to the &amp;#39;nginx_source&amp;#39; directory and edit the &amp;#39;rules&amp;#39; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd ~/compile/nginx_source/nginx-1.12.0-1/debian/
vim rules
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add this parameter under &lt;code&gt;config.status.nginx&lt;/code&gt; and &lt;code&gt;config.status.nginx_debug&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;--add-module=~/compile/ngx_pagespeed/ngx_pagespeed-release-1.11.33.4
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save and exit.&lt;/p&gt;
&lt;h4&gt;Step 5 - Build the nginx Ubuntu package and install it&lt;/h4&gt;
&lt;p&gt;Go to the nginx source directory and build nginx from source with the dpkg-buildpackage command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd ~/compile/nginx_source/nginx-1.12.0-1/
dpkg-buildpackage -b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The nginx Ubuntu package will be saved under ~/compile/nginx_source/. Once package building is complete, please look in the directory:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd ~/compile/nginx_source/
ls
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The nginx Ubuntu packages have now been built.
Install nginx and its module packages with the dpkg command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;dpkg -i *.deb
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Testing&lt;/h3&gt;
&lt;h4&gt;Step 1 - Testing with the Nginx Command&lt;/h4&gt;
&lt;p&gt;Run &lt;code&gt;nginx -V&lt;/code&gt; to check that the ngx_pagespeed module has been built into nginx.&lt;/p&gt;
&lt;h4&gt;Step 2 - Testing with Curl Command&lt;/h4&gt;
&lt;p&gt;Go to the nginx configuration directory and edit the main configuration file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd /etc/nginx/
nano nginx.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Paste the configuration below into your server block to enable ngx_pagespeed.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pagespeed on;

# Needs to exist and be writable by nginx.  Use tmpfs for best performance.
pagespeed FileCachePath /var/ngx_pagespeed_cache;

# Ensure requests for pagespeed optimized resources go to the pagespeed handler
# and no extraneous headers get set.
location ~ &amp;quot;\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+&amp;quot; {
  add_header &amp;quot;&amp;quot; &amp;quot;&amp;quot;;
}
location ~ &amp;quot;^/pagespeed_static/&amp;quot; { }
location ~ &amp;quot;^/ngx_pagespeed_beacon$&amp;quot; { }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save and exit.
Next, test the nginx configuration file and make sure there is no error:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nginx -t
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Restart nginx:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, access the nginx web server with the curl command and look for the &lt;code&gt;X-Page-Speed&lt;/code&gt; header in the response:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -I your-ip-address
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>What are Generators?</title><link>https://rezhajul.io/posts/what-are-generators/</link><guid isPermaLink="true">https://rezhajul.io/posts/what-are-generators/</guid><description>What are Generators?</description><pubDate>Fri, 21 Apr 2017 00:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Generators are special &lt;strong&gt;functions&lt;/strong&gt; that produce &lt;strong&gt;iterators&lt;/strong&gt;. They behave like ordinary iterators but can have better performance characteristics. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating values on demand, resulting in lower memory consumption.&lt;/li&gt;
&lt;li&gt;The values returned are lazily generated. Hence, it is not necessary to wait until all the values in a list are generated before using them.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, the set of generated values can only be used once.&lt;/p&gt;
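&lt;p&gt;A quick way to see the memory benefit is to compare a list with an equivalent generator expression (the lazily evaluated cousin of a list comprehension); the variable names here are just for illustration:&lt;/p&gt;

```python
import sys

nums_list = [x * x for x in range(100_000)]   # all values built up front
nums_gen = (x * x for x in range(100_000))    # values produced on demand

# The generator object stays tiny no matter how many values it can yield.
print(sys.getsizeof(nums_list) > sys.getsizeof(nums_gen))  # True

# We can start consuming immediately, without waiting for the full sequence.
print(next(nums_gen))  # 0
print(next(nums_gen))  # 1
```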
&lt;p&gt;Generators look like normal functions, but instead of the &lt;strong&gt;return statement&lt;/strong&gt; they make use of the &lt;strong&gt;yield statement&lt;/strong&gt;. The &lt;code&gt;yield&lt;/code&gt; statement tells the interpreter to store local variables and record the current position in the generator, so when another call is made to the generator, it will &lt;strong&gt;resume from that saved location&lt;/strong&gt; and with the previous values of local variables intact.&lt;/p&gt;
&lt;p&gt;Consider this generator:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def test_generator():
    yield 1
    yield 2
    yield 3

g = test_generator()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can now iterate over g using the next() function:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(next(g)) # 1
print(next(g)) # 2
print(next(g)) # 3
print(next(g)) # raises StopIteration
&lt;/code&gt;&lt;/pre&gt;
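&lt;p&gt;In practice, generators are usually consumed with a for loop, which calls next() for us and handles StopIteration cleanly. Looping a second time also shows the single-use behavior mentioned above:&lt;/p&gt;

```python
def test_generator():
    yield 1
    yield 2
    yield 3

g = test_generator()

# The for loop advances the generator and stops silently at StopIteration.
for value in g:
    print(value)  # 1, then 2, then 3

# The generator is now exhausted, so a second pass produces nothing.
print(list(g))  # []
```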
</content:encoded></item><item><title>Using an interface as a parameter</title><link>https://rezhajul.io/posts/using-an-interface-as-a-parameter/</link><guid isPermaLink="true">https://rezhajul.io/posts/using-an-interface-as-a-parameter/</guid><description>Using an interface as a parameter</description><pubDate>Thu, 20 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;You can define methods that take an &lt;code&gt;interface&lt;/code&gt; as a parameter. Your &lt;code&gt;interface&lt;/code&gt; defines a contract and your methods will accept as parameter any objects whose class implements that &lt;code&gt;interface&lt;/code&gt;. This is in fact one of the most common and useful ways to use an &lt;code&gt;interface&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;interface Test {
  public void test(); //define the interface
}

class Tester {
  public void runTest(Test t) {
    t.test();
  } // method with interface as param
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;MyTest&lt;/code&gt; class will implement this interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;class MyTest implements Test {
  public void test() {
    // running code
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the &lt;code&gt;runTest&lt;/code&gt; method will take as a parameter any object that implements the &lt;code&gt;Test&lt;/code&gt; Interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;Tester tester = new Tester();
Test test1 = new MyTest();
tester.runTest(test1);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The collection framework from the standard Java API frequently uses this procedure. For example, &lt;code&gt;Collections.sort()&lt;/code&gt; can sort any class that implements the &lt;code&gt;List&lt;/code&gt; interface and whose contents implement the &lt;code&gt;Comparable&lt;/code&gt; interface.&lt;/p&gt;
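&lt;p&gt;Here is a minimal, self-contained sketch of that idea (the class name &lt;code&gt;SortDemo&lt;/code&gt; and the variable names are illustrative, not from the Java API):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        // ArrayList implements List, and String implements Comparable,
        // so Collections.sort() accepts this object purely through those interfaces.
        List<String> words = new ArrayList<>(List.of("hippo", "hungry", "ant"));
        Collections.sort(words);
        System.out.println(words); // [ant, hippo, hungry]
    }
}
```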
</content:encoded></item><item><title>Using bounded type parameters in generic methods</title><link>https://rezhajul.io/posts/using-bounded-type-parameters-in-generic-methods/</link><guid isPermaLink="true">https://rezhajul.io/posts/using-bounded-type-parameters-in-generic-methods/</guid><description>Using bounded type parameters in generic methods</description><pubDate>Tue, 18 Apr 2017 23:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Sometimes it may be appropriate to write a generic method, however it will not be possible for it to accept every type while still maintaining all the necessary functionality.&lt;/p&gt;
&lt;p&gt;To solve this, use bounded type parameters to restrict the arguments a generic method accepts to a particular family of types.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;public &amp;lt;T extends Shape&amp;gt;
  void drawAll(List&amp;lt;T&amp;gt; shapes){
    for (Shape s: shapes) {
        s.draw(this);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above method draws a list of shapes. With an unbounded type parameter the method would accept lists of any type, even though only shapes can actually be drawn.&lt;/p&gt;
&lt;p&gt;By specifying &lt;code&gt;&amp;lt;T extends Shape&amp;gt;&lt;/code&gt; we guarantee that the method only accepts lists whose elements are &lt;code&gt;Shape&lt;/code&gt; or one of its subclasses.&lt;/p&gt;
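&lt;p&gt;The same mechanism works with other bounds. For example, a hypothetical &lt;code&gt;max&lt;/code&gt; helper can declare &lt;code&gt;T extends Comparable&amp;lt;T&amp;gt;&lt;/code&gt; so the compiler guarantees every element has a &lt;code&gt;compareTo&lt;/code&gt; method (class and method names here are made up for the sketch):&lt;/p&gt;

```java
import java.util.List;

public class BoundedDemo {
    // The bound T extends Comparable<T> guarantees compareTo() exists on T.
    static <T extends Comparable<T>> T max(List<T> items) {
        T best = items.get(0);
        for (T item : items) {
            if (item.compareTo(best) > 0) {
                best = item;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(List.of(3, 1, 4, 1, 5)));    // 5
        System.out.println(max(List.of("pear", "apple")));  // pear
    }
}
```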
</content:encoded></item><item><title>Using the Deprecated annotation</title><link>https://rezhajul.io/posts/using-deprecated-annotation/</link><guid isPermaLink="true">https://rezhajul.io/posts/using-deprecated-annotation/</guid><description>Using the Deprecated annotation</description><pubDate>Tue, 18 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;code&gt;@Deprecated&lt;/code&gt; annotation can be used to indicate elements which should no longer be used. Any program that uses an element which is marked as &lt;code&gt;@Deprecated&lt;/code&gt; will produce a compiler warning.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;@Deprecated
public void oldMethod() {
  ...
}
public void newMethod() {
  ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, &lt;code&gt;newMethod&lt;/code&gt; replaces &lt;code&gt;oldMethod&lt;/code&gt;. However we do not want to remove &lt;code&gt;oldMethod&lt;/code&gt; completely because keeping it will help maintain backwards compatibility with older versions of the program.&lt;/p&gt;
&lt;p&gt;We can keep &lt;code&gt;oldMethod()&lt;/code&gt; and indicate to programmers that it should not be used in new versions by applying the &lt;code&gt;@Deprecated&lt;/code&gt; annotation.&lt;/p&gt;
</content:encoded></item><item><title>Diamond Operator in Java</title><link>https://rezhajul.io/posts/diamond-operator-in-java/</link><guid isPermaLink="true">https://rezhajul.io/posts/diamond-operator-in-java/</guid><description>Diamond Operator in Java</description><pubDate>Mon, 17 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Since Java 7 it&amp;#39;s not necessary to declare the type parameter twice while instantiating objects like Maps, Sets and Lists.&lt;/p&gt;
&lt;p&gt;Consider the following code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;Map&amp;lt;String, List&amp;lt;String&amp;gt;&amp;gt; phoneBook = new HashMap&amp;lt;String, List&amp;lt;String&amp;gt;&amp;gt;();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The type parameter for HashMap on the right-hand side of the expression seems redundant. This can be shortened using an empty &amp;quot;Diamond Operator&amp;quot; to give:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;Map&amp;lt;String, List&amp;lt;String&amp;gt;&amp;gt; phoneBook = new HashMap&amp;lt;&amp;gt;();
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Lambda Functions in Python</title><link>https://rezhajul.io/posts/lambda-functions-in-python/</link><guid isPermaLink="true">https://rezhajul.io/posts/lambda-functions-in-python/</guid><description>Lambda Functions in Python</description><pubDate>Sun, 16 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;code&gt;lambda&lt;/code&gt; keyword in Python provides a shortcut for declaring small anonymous functions. Lambda functions behave just like regular functions declared with the def keyword. They can be used whenever function objects are required.&lt;/p&gt;
&lt;p&gt;For example, this is how you’d define a simple lambda function carrying out an addition:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; add = lambda x, y: x + y
&amp;gt;&amp;gt;&amp;gt; add(5, 3)
8
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You could declare the same add function with the def keyword:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; def add(x, y):
...     return x + y
&amp;gt;&amp;gt;&amp;gt; add(5, 3)
8
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you might be wondering: Why the big fuss about lambdas? If they’re just a slightly more terse version of declaring functions with def, what’s the big deal?&lt;/p&gt;
&lt;p&gt;Take a look at the following example and keep the words function expression in your head while you do that:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; (lambda x, y: x + y)(5, 3)
8
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Okay, what happened here? I just used lambda to define an “add” function inline and then immediately called it with the arguments 5 and 3.&lt;/p&gt;
&lt;p&gt;Conceptually the lambda expression lambda x, y: x + y is the same as declaring a function with def, just written inline. The difference is I didn’t bind it to a name like add before I used it. I simply stated the expression I wanted to compute and then immediately evaluated it by calling it like a regular function.&lt;/p&gt;
&lt;p&gt;Before you move on, you might want to play with the previous code example a little to really let the meaning of it sink in. I still remember this took me a while to wrap my head around. So don’t worry about spending a few minutes in an interpreter session.&lt;/p&gt;
&lt;p&gt;There’s another syntactic difference between lambdas and regular function definitions: Lambda functions are restricted to a single expression. This means a lambda function can’t use statements or annotations—not even a return statement.&lt;/p&gt;
&lt;p&gt;How do you return values from lambdas then? Executing a lambda function evaluates its expression and then automatically returns its result. So there’s always an implicit return statement. That’s why some people refer to lambdas as single expression functions.&lt;/p&gt;
&lt;h2&gt;Lambdas You Can Use&lt;/h2&gt;
&lt;p&gt;When should you use lambda functions in your code? Technically, any time you’re expected to supply a function object you can use a lambda expression. And because a lambda expression can be anonymous, you don’t even need to assign it to a name.&lt;/p&gt;
&lt;p&gt;This can provide a handy and “unbureaucratic” shortcut to defining a function in Python. My most frequent use case for lambdas is writing short and concise key funcs for sorting iterables by an alternate key:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; sorted(range(-5, 6), key=lambda x: x ** 2)
[0, -1, 1, -2, 2, -3, 3, -4, 4, -5, 5]
&lt;/code&gt;&lt;/pre&gt;
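&lt;p&gt;The same pattern works for sorting richer objects. Here is a small made-up example (the &lt;code&gt;people&lt;/code&gt; data is invented for illustration) that sorts dictionaries by one of their fields:&lt;/p&gt;

```python
people = [
    {"name": "Alice", "age": 31},
    {"name": "Bob", "age": 27},
    {"name": "Carol", "age": 45},
]

# The lambda picks out the field each dict should be sorted by.
by_age = sorted(people, key=lambda p: p["age"])
print([p["name"] for p in by_age])  # ['Bob', 'Alice', 'Carol']
```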
&lt;p&gt;Like regular nested functions, lambdas also work as lexical closures.&lt;/p&gt;
&lt;p&gt;What’s a lexical closure? Just a fancy name for a function that remembers the values from the enclosing lexical scope even when the program flow is no longer in that scope. Here’s a (fairly academic) example to illustrate the idea:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; def make_adder(n):
...     return lambda x: x + n

&amp;gt;&amp;gt;&amp;gt; plus_3 = make_adder(3)
&amp;gt;&amp;gt;&amp;gt; plus_5 = make_adder(5)

&amp;gt;&amp;gt;&amp;gt; plus_3(4)
7
&amp;gt;&amp;gt;&amp;gt; plus_5(4)
9
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the above example the x + n lambda can still access the value of n even though it was defined in the make_adder function (the enclosing scope).&lt;/p&gt;
&lt;p&gt;Sometimes, using a lambda function instead of a nested function declared with def can express one’s intent more clearly. But to be honest this isn’t a common occurrence—at least in the kind of code that I like to write.&lt;/p&gt;
&lt;h2&gt;But Maybe You Shouldn’t…&lt;/h2&gt;
&lt;p&gt;Now on the one hand I’m hoping this article got you interested in exploring Python’s lambda functions. On the other hand I feel like it’s time to put up another caveat: Lambda functions should be used sparingly and with extraordinary care.&lt;/p&gt;
&lt;p&gt;I know I wrote my fair share of code using lambdas that looked “cool” but was actually a liability for me and my coworkers. If you’re tempted to use a lambda, spend a few seconds (or minutes) to think if this is really the cleanest and most maintainable way to achieve the desired result.&lt;/p&gt;
&lt;p&gt;For example, doing something like this to save two lines of code is just silly. Sure, it technically works and it’s a nice enough “trick”. But it’s also going to confuse the next gal or guy having to ship a bugfix under a tight deadline:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Harmful:
&amp;gt;&amp;gt;&amp;gt; class Car:
...     rev = lambda self: print(&amp;#39;Wroom!&amp;#39;)
...     crash = lambda self: print(&amp;#39;Boom!&amp;#39;)

&amp;gt;&amp;gt;&amp;gt; my_car = Car()
&amp;gt;&amp;gt;&amp;gt; my_car.crash()
Boom!
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I feel similarly about complicated map() or filter() constructs using lambdas. Usually it’s much cleaner to go with a list comprehension or generator expression:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Harmful:
&amp;gt;&amp;gt;&amp;gt; list(filter(lambda x: x % 2 == 0, range(16)))
[0, 2, 4, 6, 8, 10, 12, 14]

# Better:
&amp;gt;&amp;gt;&amp;gt; [x for x in range(16) if x % 2 == 0]
[0, 2, 4, 6, 8, 10, 12, 14]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you find yourself doing anything remotely complex with a lambda expression, consider defining a real function with a proper name instead.&lt;/p&gt;
&lt;p&gt;Saving a few keystrokes won’t matter in the long run. Your colleagues (and your future self) will appreciate clean and readable code more than terse wizardry.&lt;/p&gt;
&lt;h2&gt;Things to Remember&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Lambda functions are single-expression functions that are not necessarily bound to a name (anonymous).&lt;/li&gt;
&lt;li&gt;Lambda functions can’t use regular Python statements and always include an implicit return statement.&lt;/li&gt;
&lt;li&gt;Always ask yourself: Would using a regular (named) function or a list/generator expression offer more clarity?&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>Altering format string output by changing a format specifier&apos;s argument_index</title><link>https://rezhajul.io/posts/altering-format-string-output/</link><guid isPermaLink="true">https://rezhajul.io/posts/altering-format-string-output/</guid><description>Altering format string output by changing a format specifier&apos;s argument_index</description><pubDate>Sat, 15 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;A format string is a string which can include one or more format specifiers.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;String hungry = &amp;quot;hungry&amp;quot;;
String hippo = &amp;quot;hippo&amp;quot;;
String s = String.format(
  &amp;quot;%s %s&amp;quot;,
  hungry,
  hippo);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, &amp;quot;&lt;code&gt;%s %s&lt;/code&gt;&amp;quot; is a format string, and &lt;code&gt;%s&lt;/code&gt; is a format specifier. The value of s is &amp;quot;hungry hippo&amp;quot;.&lt;/p&gt;
&lt;p&gt;Modify the order that the arguments appear in the format string by specifying an argument index in the format specifiers.&lt;/p&gt;
&lt;p&gt;Argument indexes take the form of a non-negative integer followed by &lt;code&gt;$&lt;/code&gt;, where the integer specifies the position of the argument in the argument list.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;String s = String.format(
  &amp;quot;%2$s %1$s&amp;quot;,
  hungry,
  hippo);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output of the example above will be &amp;quot;hippo hungry&amp;quot; because we have specified that argument 2 (%2$s) will come before argument 1 (%1$s).&lt;/p&gt;
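&lt;p&gt;Argument indexes can also repeat an argument without passing it twice. A small sketch (the wrapper class &lt;code&gt;FormatDemo&lt;/code&gt; is hypothetical, just to make the snippet runnable):&lt;/p&gt;

```java
public class FormatDemo {
    public static void main(String[] args) {
        // %1$s appears twice, so the first argument is printed twice.
        String s = String.format("%1$s %1$s %2$s", "hungry", "hippo");
        System.out.println(s); // hungry hungry hippo
    }
}
```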
</content:encoded></item><item><title>Add Autocorrect to Git</title><link>https://rezhajul.io/posts/add-autocorrect-to-git/</link><guid isPermaLink="true">https://rezhajul.io/posts/add-autocorrect-to-git/</guid><description>Add Autocorrect to Git</description><pubDate>Fri, 14 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;If you want git to correct typos you can set help.autocorrect:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git config --global help.autocorrect 30
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You set help.autocorrect to an integer giving the delay, in tenths of a second, before git runs the corrected command; a value of 30 gives you 3 seconds to change your mind.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git comit
WARNING: You called a git command
named &amp;#39;comit&amp;#39;, which does not exist.
Continuing under the assumption that
you meant &amp;#39;commit&amp;#39;
in 3 seconds automatically...
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Future of Interface in Java 9</title><link>https://rezhajul.io/posts/future-of-interface-in-java-9/</link><guid isPermaLink="true">https://rezhajul.io/posts/future-of-interface-in-java-9/</guid><description>Interface in Java 9</description><pubDate>Thu, 13 Apr 2017 16:33:17 GMT</pubDate><content:encoded>&lt;p&gt;&lt;code&gt;Interface&lt;/code&gt; is a very popular way to expose our APIs. Until Java 7 an interface was purely a contract of abstract methods that a child class was obliged to implement; since Java 8, interfaces have more power and can contain static and default methods.&lt;/p&gt;
&lt;p&gt;In Java 9, Interface is getting more power, to the point where we can define private methods as well. Let us understand why we need private methods in Interfaces.&lt;/p&gt;
&lt;p&gt;Suppose we are defining a ReportGenerator Interface in Java8 as below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package id.rezhajulio.test.interfacejava8;

public interface ReportGeneratorJava8 {

    /**
     * Need to get implemented as per ReportGenerator class
     * @param reportData
     * @param schema
     */
    void generateReport(String reportData, String schema);

    /**
     * Get the ready data
     * @param reportSource
     * @return
     * @throws Exception
     */
    default String getReportData(String reportSource) throws Exception {
        String reportData = null;
        if (null == reportSource) {
            throw new Exception(&amp;quot;reportSource can&amp;#39;t be null....&amp;quot;);
        }
        if (reportSource.equalsIgnoreCase(&amp;quot;DB&amp;quot;)) {
            System.out.println(&amp;quot;Reading the data from DB ....&amp;quot;);
            //logic to get the data from DB
            reportData = &amp;quot;data from DB&amp;quot;;
        } else if (reportSource.equalsIgnoreCase(&amp;quot;File&amp;quot;)) {
            System.out.println(&amp;quot;Reading the data from FileSystem ....&amp;quot;);
            //logic to get the data from File
            reportData = &amp;quot;data from File&amp;quot;;
        } else if (reportSource.equalsIgnoreCase(&amp;quot;Cache&amp;quot;)) {
            System.out.println(&amp;quot;Reading the data from Cache ....&amp;quot;);
            //logic to get the data from Cache
            reportData = &amp;quot;data from Cache&amp;quot;;
        }
        System.out.println(&amp;quot;Formatting the data to create a common standard&amp;quot;);
        /** Format the data and then return **/
        //logic to format the data
        return reportData;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the implementation class could be HtmlReportGeneratorJava8:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package id.rezhajulio.test.interfacejava8;

public class HtmlReportGeneratorJava8 implements ReportGeneratorJava8 {

    @Override
    public void generateReport(String reportData, String schema) {
        //HTML Specific Implementation according to given schema
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &amp;#39;&lt;code&gt;getReportData&lt;/code&gt;&amp;#39; method looks pretty messy: it contains a lot of logic that could be moved into separate helper methods called from &amp;#39;&lt;code&gt;getReportData&lt;/code&gt;&amp;#39;. To achieve that we need private methods, as we don&amp;#39;t want to expose these helpers to the outside world.&lt;/p&gt;
&lt;p&gt;Also, the ReportGeneratorJava8 interface formats the data after fetching it from whichever source, so we can define a common private method named &amp;#39;&lt;code&gt;formatData&lt;/code&gt;&amp;#39; in the interface. The interface can then be rewritten as below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package id.rezhajulio.test.interfacejava9;

public interface ReportGeneratorJava9 {

    /**
     * Need to get implemented as per ReportGenerator class
     * @param reportData
     * @param schema
     */
    void generateReport(String reportData, String schema);

    /**
     * Reading the report data from DB
     * @return
     */
    private String getReportDataFromDB() {
        System.out.println(&amp;quot;Reading the data from DB ....&amp;quot;);
        //logic to get the data from DB
        String reportData = &amp;quot;data from DB&amp;quot;;
        return formatData(reportData);
    }

    /**
     * Reading the report data from FileSystem
     * @return
     */
    private String getReportDataFromFile() {
        System.out.println(&amp;quot;Reading the data from FileSystem ....&amp;quot;);
        //logic to get the data from File
        String reportData = &amp;quot;data from File&amp;quot;;
        return formatData(reportData);
    }

    /**
     * Reading the report data from cache
     * @return
     */
    private String getReportDataFromCache() {
        System.out.println(&amp;quot;Reading the data from Cache ....&amp;quot;);
        //logic to get the data from Cache
        String reportData = &amp;quot;data from Cache&amp;quot;;
        return formatData(reportData);
    }

    /**
     * Formatting the data to create a common standardized data,
     * as it&amp;#39;s coming from different systems
     * @param reportData
     * @return
     */
    private String formatData(String reportData) {
        System.out.println(&amp;quot;Formatting the data to create a common standard&amp;quot;);
        /** Format the data and then return **/
        //logic to format the data
        return reportData;
    }

    /**
     * Get the ready data
     * @param reportSource
     * @return
     * @throws Exception
     */
    default String getReportData(String reportSource) throws Exception {
        String reportData = null;
        if (null == reportSource) {
            throw new Exception(&amp;quot;reportSource can&amp;#39;t be null....&amp;quot;);
        }
        if (reportSource.equalsIgnoreCase(&amp;quot;DB&amp;quot;)) {
            reportData = getReportDataFromDB();
        } else if (reportSource.equalsIgnoreCase(&amp;quot;File&amp;quot;)) {
            reportData = getReportDataFromFile();
        } else if (reportSource.equalsIgnoreCase(&amp;quot;Cache&amp;quot;)) {
            reportData = getReportDataFromCache();
        }
        return reportData;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the above implementation looks much cleaner, and we&amp;#39;ve seen why private methods are needed in an Interface.&lt;/p&gt;
&lt;h2&gt;Summary of Interface Enhancements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Constants (until Java 1.7)&lt;/li&gt;
&lt;li&gt;Method signatures (until Java 1.7)&lt;/li&gt;
&lt;li&gt;Nested types (until Java 1.7)&lt;/li&gt;
&lt;li&gt;Default methods (since 1.8)&lt;/li&gt;
&lt;li&gt;Static methods (since 1.8)&lt;/li&gt;
&lt;li&gt;Private methods (since 1.9)&lt;/li&gt;
&lt;li&gt;Private static methods (since 1.9)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Enjoy the power of Java 9&amp;#39;s Interface.&lt;/p&gt;
</content:encoded></item><item><title>More about Interface in Java 8</title><link>https://rezhajul.io/posts/more-about-interface-in-java-8/</link><guid isPermaLink="true">https://rezhajul.io/posts/more-about-interface-in-java-8/</guid><description>Interface in Java 8</description><pubDate>Tue, 11 Apr 2017 17:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Before Java 8, an &lt;code&gt;Interface&lt;/code&gt; defined only a contract: the methods a class had to implement when binding itself to the interface. An interface could contain nothing but abstract methods and constants.&lt;/p&gt;
&lt;p&gt;But in Java 8, Interface has become much more, and now it can have methods defined using static or default. Remember that default methods can be overridden, while static methods cannot.&lt;/p&gt;
&lt;h2&gt;Method Definitions&lt;/h2&gt;
&lt;p&gt;Interface was meant for good designers because when we create an Interface, we should know the possible contract that every class should implement. Once your contract definition is done and classes have implemented the contract, it&amp;#39;s difficult to change the definition of an Interface, as it will break the implemented classes.&lt;/p&gt;
&lt;p&gt;A good designer therefore creates an interface and also provides a base class with default definitions of the interface methods, and has classes extend the base class instead of implementing the interface directly. That way, any future change to the Interface can be absorbed by the base class, and existing subclasses are unaffected.&lt;/p&gt;
&lt;p&gt;In Java 8, they tried to fix this issue by providing method definitions — using static or default. We can add the definitions using static or default in Interface without breaking the existing classes, making it easier to design interfaces in Java 8.&lt;/p&gt;
&lt;h2&gt;Do We Still Need Abstract Classes?&lt;/h2&gt;
&lt;p&gt;After the Java 8 interface enhancements, the new version of Interface looks like a great replacement for the Abstract class, right? Not at all: there are still differences between the two. Do you remember &amp;#39;access specifiers&amp;#39; (public, protected, etc.)?&lt;/p&gt;
&lt;p&gt;An Abstract class can use any of these access specifiers on its methods and variables, while an Interface&amp;#39;s members are always public and its variables are always constants. So we need to choose wisely between Interfaces and Abstract classes; some judgment is still required.&lt;/p&gt;
&lt;h2&gt;Examples of Interface Enhancements&lt;/h2&gt;
&lt;p&gt;We can define an Interface as having static and default methods as below:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Java8Interface.java:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package id.rezhajulio.interface.enhancement;

public interface Java8Interface {

    //defaults method - by default it&amp;#39;s public
    default void hi() {
        System.out.println(&amp;quot;In Java8Interface: new feature of Java8 is saying Hi....&amp;quot;);
    }

    //static method - by default it&amp;#39;s public
    static void hello() {
        System.out.println(&amp;quot;In Java8Interface: new feature of Java8 is saying Hello....&amp;quot;);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The concrete class that is implementing the above Interface can be used as below.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ConcreteClass.java:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package id.rezhajulio.interface.enhancement;

public class ConcreteClass implements Java8Interface {

    public static void main(String[] args) {
        ConcreteClass c = new ConcreteClass();
        c.hi(); // accessible via the class instance
        //static methods are called on the interface, not the instance
        Java8Interface.hello();
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Which will lead to this output:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;In Java8Interface: new feature of Java8 is saying Hi....
In Java8Interface: new feature of Java8 is saying Hello....
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But suppose two of your interfaces declare a &amp;#39;default&amp;#39; method with the same signature. Then your class must define its own implementation of that method; otherwise you&amp;#39;ll get a compile-time error.&lt;/p&gt;
&lt;p&gt;Let&amp;#39;s see how that plays out below. The following Interface has the same methods as our Java8Interface.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Java8Interface_1.java:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package id.rezhajulio.interface.enhancement;

public interface Java8Interface_1 {

    default void hi() {
        System.out.println(&amp;quot;In Java8Interface_1: new feature of Java8 is saying Hi....&amp;quot;);
    }

    static void hello() {
        System.out.println(&amp;quot;In Java8Interface_1: new feature of Java8 is saying Hello....&amp;quot;);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Meanwhile, our concrete class handles the implementation for our default method as follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ConcreteClass.java:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package id.rezhajulio.interface.enhancement;

public class ConcreteClass implements Java8Interface, Java8Interface_1 {

    public static void main(String[] args) {
        ConcreteClass c = new ConcreteClass();
        c.hi();
        Java8Interface.hello();
    }

    //We need to override hi method otherwise compilation error
    @Override
    public void hi() {
        Java8Interface.super.hi();
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And our output will be as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;In Java8Interface: new feature of Java8 is saying Hi....
In Java8Interface: new feature of Java8 is saying Hello....
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Java 8 applies the same idea in its own APIs. For example, the &lt;code&gt;List&lt;/code&gt; interface now declares a default &lt;code&gt;sort&lt;/code&gt; method:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;default void sort(Comparator&amp;lt;? super E&amp;gt; c)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This lets us call &lt;code&gt;sort&lt;/code&gt; directly on our &lt;code&gt;List&lt;/code&gt;, so going through &lt;code&gt;Collections.sort()&lt;/code&gt; is no longer necessary.&lt;/p&gt;
&lt;p&gt;And let&amp;#39;s see the snippet for calling the sort method using a List API:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;List&amp;lt;String&amp;gt; list = new ArrayList&amp;lt;&amp;gt;();
list.add(&amp;quot;v&amp;quot;);
list.add(&amp;quot;a&amp;quot;);
list.add(&amp;quot;z&amp;quot;);
list.add(&amp;quot;d&amp;quot;);

list.sort((val1, val2) -&amp;gt; val1.compareTo(val2));
&lt;/code&gt;&lt;/pre&gt;
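&lt;p&gt;As a small sketch (the class name &lt;code&gt;SortDemo&lt;/code&gt; is made up for illustration), the same call can also be written with &lt;code&gt;Comparator.naturalOrder()&lt;/code&gt; instead of spelling out the lambda:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortDemo { // hypothetical class name
    public static void main(String[] args) {
        List&amp;lt;String&amp;gt; list = new ArrayList&amp;lt;&amp;gt;();
        list.add(&amp;quot;v&amp;quot;);
        list.add(&amp;quot;a&amp;quot;);
        list.add(&amp;quot;z&amp;quot;);
        list.add(&amp;quot;d&amp;quot;);
        // Same result as list.sort((val1, val2) -&amp;gt; val1.compareTo(val2))
        list.sort(Comparator.naturalOrder());
        System.out.println(list); // [a, d, v, z]
    }
}
&lt;/code&gt;&lt;/pre&gt;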
&lt;p&gt;Enjoy the power of Interface!&lt;/p&gt;
&lt;/content:encoded&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;Spring Boot in a Single File&lt;/title&gt;&lt;link&gt;https://rezhajul.io/posts/spring-boot-in-a-single-file/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://rezhajul.io/posts/spring-boot-in-a-single-file/&lt;/guid&gt;&lt;description&gt;Spring Boot in a Single File&lt;/description&gt;&lt;pubDate&gt;Mon, 10 Apr 2017 17:33:17 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&amp;lt;p&amp;gt;Over the years, Spring has become more and more complex as new functionality has been added. Just visit &amp;lt;a href=&amp;quot;https://spring.io/projects&amp;quot;&amp;gt;https://spring.io/projects&amp;lt;/a&amp;gt; and you will see all the Spring projects we can use in our applications. Starting a new Spring project used to require real effort: adding build paths or Maven dependencies, configuring an application server, and writing Spring configuration, all from scratch. Spring Boot is the solution to this problem. It is built on top of the existing Spring framework, including Spring MVC, and spares us the boilerplate code and configuration we previously had to write. Spring Boot thus helps us use the existing Spring functionality more robustly and with minimum effort.&amp;lt;/p&amp;gt;
&lt;p&gt;Features of Spring Boot:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Auto-Configuration - No need to manually configure the dispatcher servlet, static resource mappings, property source loader, message converters, etc.&lt;/li&gt;
&lt;li&gt;Dependency Management - The versions of commonly used libraries are pre-selected and grouped in starter POMs that we can include in our project. By selecting one Spring Boot version we implicitly select dozens of dependencies that we would otherwise have to pick and harmonize ourselves.&lt;/li&gt;
&lt;li&gt;Advanced Externalized Configuration - A large list of bean properties can be configured through the application.properties file without touching Java or XML config.&lt;/li&gt;
&lt;li&gt;Production Support - We get health checks, application and JVM metrics, JMX over HTTP, and a few more things for free.&lt;/li&gt;
&lt;li&gt;Runnable Jars - We can package our application as a runnable jar with an embedded Tomcat, giving a self-contained deployment unit.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First you need to install spring-boot-cli. You can use SDKMAN to install the latest spring-boot-cli and get updates in the future. I&amp;#39;ve covered that &lt;a href=&quot;https://rezhajulio.id/manage-your-jvm-environment-with-sdkman/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Spring Boot also has Groovy support, allowing us to build Spring MVC web apps with as little as a single file.
Create a new file called app.groovy and put the following code in it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-groovy&quot;&gt;@RestController
class ThisWillActuallyRun {

    @RequestMapping(&amp;quot;/&amp;quot;)
    String home() {
        return &amp;quot;Hello World from Spring Boot!&amp;quot;
    }

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run it as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ spring run app.groovy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will see its log in your terminal:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  .   ____          _            __ _ _
 /\\ / ___&amp;#39;_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | &amp;#39;_ | &amp;#39;_| | &amp;#39;_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  &amp;#39;  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.5.2.RELEASE)

2017-04-09 19:36:30.777  INFO 25470 --- [       runner-0] o.s.boot.SpringApplication               : Starting application on kimiamania with PID 25470 (started by rezha in /home/rezha)
2017-04-09 19:36:30.779  INFO 25470 --- [       runner-0] o.s.boot.SpringApplication               : No active profile set, falling back to default profiles: default
2017-04-09 19:36:31.012  INFO 25470 --- [       runner-0] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@33b6649c: startup date [Sun Apr 09 19:36:31 WIB 2017]; root of context hierarchy
2017-04-09 19:36:32.069  INFO 25470 --- [       runner-0] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
2017-04-09 19:36:32.079  INFO 25470 --- [       runner-0] o.apache.catalina.core.StandardService   : Starting service Tomcat
2017-04-09 19:36:32.080  INFO 25470 --- [       runner-0] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/8.5.11
2017-04-09 19:36:32.119  INFO 25470 --- [ost-startStop-1] org.apache.catalina.loader.WebappLoader  : Unknown loader org.springframework.boot.cli.compiler.ExtendedGroovyClassLoader$DefaultScopeParentClassLoader@517242a2 class org.springframework.boot.cli.compiler.ExtendedGroovyClassLoader$DefaultScopeParentClassLoader
2017-04-09 19:36:32.130  INFO 25470 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2017-04-09 19:36:32.130  INFO 25470 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1118 ms
2017-04-09 19:36:32.207  INFO 25470 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean  : Mapping servlet: &amp;#39;dispatcherServlet&amp;#39; to [/]
2017-04-09 19:36:32.210  INFO 25470 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: &amp;#39;characterEncodingFilter&amp;#39; to: [/*]
2017-04-09 19:36:32.211  INFO 25470 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: &amp;#39;hiddenHttpMethodFilter&amp;#39; to: [/*]
2017-04-09 19:36:32.211  INFO 25470 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: &amp;#39;httpPutFormContentFilter&amp;#39; to: [/*]
2017-04-09 19:36:32.211  INFO 25470 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: &amp;#39;requestContextFilter&amp;#39; to: [/*]
2017-04-09 19:36:32.377  INFO 25470 --- [       runner-0] s.w.s.m.m.a.RequestMappingHandlerAdapter : Looking for @ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@33b6649c: startup date [Sun Apr 09 19:36:31 WIB 2017]; root of context hierarchy
2017-04-09 19:36:32.418  INFO 25470 --- [       runner-0] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped &amp;quot;{[/]}&amp;quot; onto public java.lang.String ThisWillActuallyRun.home()
2017-04-09 19:36:32.420  INFO 25470 --- [       runner-0] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped &amp;quot;{[/error],produces=[text/html]}&amp;quot; onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2017-04-09 19:36:32.420  INFO 25470 --- [       runner-0] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped &amp;quot;{[/error]}&amp;quot; onto public org.springframework.http.ResponseEntity&amp;lt;java.util.Map&amp;lt;java.lang.String, java.lang.Object&amp;gt;&amp;gt; org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2017-04-09 19:36:32.441  INFO 25470 --- [       runner-0] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2017-04-09 19:36:32.441  INFO 25470 --- [       runner-0] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2017-04-09 19:36:32.468  INFO 25470 --- [       runner-0] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2017-04-09 19:36:32.760  INFO 25470 --- [       runner-0] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2017-04-09 19:36:32.801  INFO 25470 --- [       runner-0] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2017-04-09 19:36:32.805  INFO 25470 --- [       runner-0] o.s.boot.SpringApplication               : Started application in 2.342 seconds (JVM running for 4.544)
2017-04-09 19:36:52.243  INFO 25470 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring FrameworkServlet &amp;#39;dispatcherServlet&amp;#39;
2017-04-09 19:36:52.243  INFO 25470 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet &amp;#39;dispatcherServlet&amp;#39;: initialization started
2017-04-09 19:36:52.254  INFO 25470 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet &amp;#39;dispatcherServlet&amp;#39;: initialization completed in 11 ms
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, you can open your browser or curl to localhost:8080. Spring Boot does this by dynamically adding key annotations to your code and using Groovy Grape to pull down libraries needed to make the app run.&lt;/p&gt;
</content:encoded></item><item><title>Manage Your JVM Environment with SDKMAN</title><link>https://rezhajul.io/posts/manage-your-jvm-environment-with-sdkman/</link><guid isPermaLink="true">https://rezhajul.io/posts/manage-your-jvm-environment-with-sdkman/</guid><description>SDKMAN or SDK Manager for JVM</description><pubDate>Sun, 09 Apr 2017 17:33:17 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;SDKMAN!&lt;/strong&gt; is a tool for managing parallel versions of multiple Software Development Kits on most Unix based systems. It provides a convenient Command Line Interface (CLI) and API for installing, switching, removing and listing Candidates. Formerly known as GVM the Groovy enVironment Manager, it was inspired by the very useful RVM and rbenv tools, used at large by the Ruby community.&lt;/p&gt;
&lt;p&gt;Install Software Development Kits for the JVM such as Java, Groovy, Scala, Kotlin and Ceylon. Activator, Ant, Gradle, Grails, Maven, SBT, Spring Boot, Vert.x and many others are also supported.&lt;/p&gt;
&lt;h2&gt;Installing&lt;/h2&gt;
&lt;p&gt;Installing SDKMAN! on UNIX-like platforms is as easy as ever. SDKMAN! installs smoothly on Mac OS X, Linux, Cygwin, Solaris and FreeBSD. The Bash and ZSH shells are also supported.
Simply open a new terminal and enter:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -s &amp;quot;https://get.sdkman.io&amp;quot; | bash
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Follow the instructions on-screen to complete installation.
Next, open a new terminal or enter:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ source &amp;quot;$HOME/.sdkman/bin/sdkman-init.sh&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Lastly, run the following code snippet to ensure that installation succeeded:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ sdk version
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If all went well, the version should be displayed. Something like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  sdkman 5.0.0+51
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Beta Channel&lt;/h2&gt;
&lt;p&gt;For the more adventurous among us, they have a beta channel. All new CLI features will first be rolled out to this group of users for trial purposes. Beta versions can be considered stable for the most part, but might occasionally break. To join the beta program, simply update the &lt;code&gt;~/.sdkman/etc/config&lt;/code&gt; file as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sdkman_beta_channel=true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, open a new terminal and perform a forced update with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sdk selfupdate force
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To leave the beta channel, simply set the above config back to false and follow the same procedure.&lt;/p&gt;
&lt;h2&gt;Installing an SDK&lt;/h2&gt;
&lt;h2&gt;Latest Stable&lt;/h2&gt;
&lt;p&gt;Install the latest stable version of your SDK of choice (say, Java JDK) by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ sdk install java
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will see something like the following output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;Downloading: java 8u111

In progress...

######################################################################## 100.0%

Installing: java 8u111
Done installing!
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you will be prompted if you want this version to be set as default.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Do you want java 8u111 to be set as default? (Y/n):
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Answering yes (or hitting enter) will ensure that all subsequent shells opened will have this version of the SDK in use by default.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Setting java 8u111 as default.
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Specific Version&lt;/h2&gt;
&lt;p&gt;Need a specific version of an SDK? Simply qualify the version you require:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ sdk install springboot 1.5.2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All subsequent steps are the same as above.&lt;/p&gt;
</content:encoded></item><item><title>Updating interfaces by using default methods</title><link>https://rezhajul.io/posts/updating-interfaces-by-using-default-method/</link><guid isPermaLink="true">https://rezhajul.io/posts/updating-interfaces-by-using-default-method/</guid><description>Updating interfaces by using default methods</description><pubDate>Sun, 09 Apr 2017 05:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Take the following interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;public interface Cooking {
  public void fry();
  public void boil();
  public void chop();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Suppose we want to add new functionality. Simply adding a new method called &lt;code&gt;microwave()&lt;/code&gt; to &lt;code&gt;Cooking&lt;/code&gt; will cause problems: every class that previously implemented &lt;code&gt;Cooking&lt;/code&gt; will no longer compile until it is updated to implement the new method.&lt;/p&gt;
&lt;p&gt;To avoid this, give &lt;code&gt;microwave()&lt;/code&gt; a default implementation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;public interface Cooking {
  public void fry();
  public void boil();
  public void chop();
  default void microwave() {
    //some code implementing microwave
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As &lt;code&gt;microwave()&lt;/code&gt; already has a &lt;code&gt;default&lt;/code&gt; implementation defined in the &lt;code&gt;Cooking&lt;/code&gt; interface definition, classes that implement it now don&amp;#39;t need to implement &lt;code&gt;microwave()&lt;/code&gt; in order to work.&lt;/p&gt;
&lt;p&gt;This allows us to add functionality without breaking old code.&lt;/p&gt;
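&lt;p&gt;To make this concrete, here is a minimal sketch (the class name &lt;code&gt;HomeCook&lt;/code&gt; and the method bodies are made up for illustration): a class written before &lt;code&gt;microwave()&lt;/code&gt; existed still compiles unchanged and inherits the default behavior.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;interface Cooking {
  void fry();
  void boil();
  void chop();
  default void microwave() {
    System.out.println(&amp;quot;microwaving&amp;quot;);
  }
}

public class HomeCook implements Cooking {
  public void fry()  { System.out.println(&amp;quot;frying&amp;quot;); }
  public void boil() { System.out.println(&amp;quot;boiling&amp;quot;); }
  public void chop() { System.out.println(&amp;quot;chopping&amp;quot;); }
  // No microwave() override needed: the default from Cooking is inherited.

  public static void main(String[] args) {
    new HomeCook().microwave(); // prints &amp;quot;microwaving&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;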
&lt;p&gt;Note: This has been possible since Java 8.&lt;/p&gt;
&lt;/content:encoded&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;Converting Stacktrace to String&lt;/title&gt;&lt;link&gt;https://rezhajul.io/posts/converting-stacktrace-to-string/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://rezhajul.io/posts/converting-stacktrace-to-string/&lt;/guid&gt;&lt;description&gt;Converting Stacktrace to String&lt;/description&gt;&lt;pubDate&gt;Sat, 08 Apr 2017 05:33:17 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&amp;lt;p&amp;gt;To store a stack trace as a string, you can use &amp;lt;code&amp;gt;Throwable.printStackTrace(...)&amp;lt;/code&amp;gt;.
For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;public static String getStackTrace(Throwable throwable) {
  Writer result = new StringWriter();
  PrintWriter printWriter = new PrintWriter(result);
  throwable.printStackTrace(printWriter);
  return result.toString();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the above example, &lt;code&gt;getStackTrace&lt;/code&gt; takes a &lt;code&gt;Throwable&lt;/code&gt; as a parameter and uses &lt;code&gt;printStackTrace&lt;/code&gt; to print it to a &lt;code&gt;PrintWriter&lt;/code&gt; output stream. This output is collected by the &lt;code&gt;StringWriter&lt;/code&gt; and converted to a string using &lt;code&gt;StringWriter.toString()&lt;/code&gt;.&lt;/p&gt;
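&lt;p&gt;A quick usage sketch (the class name and exception message here are invented): catch an exception and capture its trace as a &lt;code&gt;String&lt;/code&gt;, for example to hand it to a logger.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraceDemo { // hypothetical class name
    public static void main(String[] args) {
        String trace;
        try {
            throw new IllegalStateException(&amp;quot;something went wrong&amp;quot;);
        } catch (IllegalStateException e) {
            StringWriter result = new StringWriter();
            e.printStackTrace(new PrintWriter(result));
            trace = result.toString();
        }
        // The first line holds the exception type and message
        System.out.println(trace.startsWith(&amp;quot;java.lang.IllegalStateException: something went wrong&amp;quot;)); // true
    }
}
&lt;/code&gt;&lt;/pre&gt;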
</content:encoded></item><item><title>Synchronized Statement in Java</title><link>https://rezhajul.io/posts/synchronize-statement-in-java/</link><guid isPermaLink="true">https://rezhajul.io/posts/synchronize-statement-in-java/</guid><description>Using synchronized statements</description><pubDate>Fri, 07 Apr 2017 15:33:17 GMT</pubDate><content:encoded>&lt;p&gt;&lt;code&gt;synchronized&lt;/code&gt; statements can be used to avoid memory inconsistency errors and thread interference in multi-threaded programs.&lt;/p&gt;
&lt;p&gt;When a thread executes code within a synchronized statement, the object passed as a parameter is locked. When writing a synchronized block, the object providing the lock must be specified after the synchronized keyword:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;public class Rezha {
  private int sum;
  public void addToSum(int value) {
    synchronized(this) {
      sum += value;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, the object providing the lock is &lt;code&gt;this&lt;/code&gt;: the instance of &lt;code&gt;Rezha&lt;/code&gt; on which the method is being called.&lt;/p&gt;
&lt;p&gt;You can lock instances of other classes as well:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;public class Rezha {
  private MyObject mo;

  public void myMethod(){
    synchronized(mo){
      //code here
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
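&lt;p&gt;As a small illustrative sketch (the class name &lt;code&gt;SumDemo&lt;/code&gt; and the loop counts are invented), two threads adding to the same sum through a synchronized block always produce the expected total, because only one thread can hold the lock at a time:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;public class SumDemo { // hypothetical class name
    private int sum;

    public void addToSum(int value) {
        synchronized (this) {
            sum += value; // the read-modify-write is now atomic per instance
        }
    }

    public int getSum() {
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        SumDemo demo = new SumDemo();
        Runnable task = () -&amp;gt; {
            for (int i = 0; i &amp;lt; 100000; i++) {
                demo.addToSum(1);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(demo.getSum()); // 200000, never less
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Without the synchronized block, the two threads could interleave their read-modify-write steps and lose updates.&lt;/p&gt;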
</content:encoded></item><item><title>Function in Python are First-Class Object</title><link>https://rezhajul.io/posts/function-in-python-are-first-class-object/</link><guid isPermaLink="true">https://rezhajul.io/posts/function-in-python-are-first-class-object/</guid><description>Python’s functions are first-class objects</description><pubDate>Thu, 06 Apr 2017 14:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Python’s functions are first-class objects. You can assign them to variables, store them in data structures, pass them as arguments to other functions, and even return them as values from other functions.&lt;/p&gt;
&lt;p&gt;Grokking these concepts intuitively will make understanding advanced features in Python like lambdas and decorators (I will cover these two in the next post) much easier. It also puts you on a path towards functional programming techniques.&lt;/p&gt;
&lt;p&gt;In this post I’ll guide you through a number of examples to help you develop this intuitive understanding. The examples will build on top of one another, so you might want to read them in sequence and even to try out some of them in a Python interpreter session as you go along.&lt;/p&gt;
&lt;p&gt;Wrapping your head around the concepts we’ll be discussing here might take a little longer than expected. Don’t worry—that’s completely normal. I’ve been there. You might feel like you’re banging your head against the wall, and then suddenly things will “click” and fall into place when you’re ready.&lt;/p&gt;
&lt;p&gt;Throughout this post I’ll be using this yell function for demonstration purposes. It’s a simple toy example with easily recognizable output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def yell(text):
    return text.upper() + &amp;#39;!&amp;#39;

&amp;gt;&amp;gt;&amp;gt; yell(&amp;#39;hello&amp;#39;)
&amp;#39;HELLO!&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Functions Are Objects&lt;/h2&gt;
&lt;p&gt;All data in a Python program is represented by objects or relations between objects. Things like strings, lists, modules, and functions are all objects. There’s nothing particularly special about functions in Python.&lt;/p&gt;
&lt;p&gt;Because the yell function is an object in Python you can assign it to another variable, just like any other object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; bark = yell
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This line doesn’t call the function. It takes the function object referenced by yell and creates a second name pointing to it, bark. You could now also execute the same underlying function object by calling bark:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; bark(&amp;#39;woof&amp;#39;)
&amp;#39;WOOF!&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Function objects and their names are two separate concerns. Here’s more proof: You can delete the function’s original name (yell). Because another name (bark) still points to the underlying function you can still call the function through it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; del yell

&amp;gt;&amp;gt;&amp;gt; yell(&amp;#39;hello?&amp;#39;)
NameError: &amp;quot;name &amp;#39;yell&amp;#39; is not defined&amp;quot;

&amp;gt;&amp;gt;&amp;gt; bark(&amp;#39;hey&amp;#39;)
&amp;#39;HEY!&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By the way, Python attaches a string identifier to every function at creation time for debugging purposes. You can access this internal identifier with the &lt;code&gt;__name__&lt;/code&gt; attribute:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; bark.__name__
&amp;#39;yell&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;While the function’s &lt;code&gt;__name__&lt;/code&gt; is still “yell” that won’t affect how you can access it from your code. This identifier is merely a debugging aid. A variable pointing to a function and the function itself are two separate concerns.&lt;/p&gt;
&lt;p&gt;(Since Python 3.3 there’s also &lt;code&gt;__qualname__&lt;/code&gt; which serves a similar purpose and provides a qualified name string to disambiguate function and class names.)&lt;/p&gt;
&lt;h2&gt;Functions Can Be Stored In Data Structures&lt;/h2&gt;
&lt;p&gt;As functions are first-class citizens you can store them in data structures, just like you can with other objects. For example, you can add functions to a list:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; funcs = [bark, str.lower, str.capitalize]
&amp;gt;&amp;gt;&amp;gt; funcs
[&amp;lt;function yell at 0x10ff96510&amp;gt;,
 &amp;lt;method &amp;#39;lower&amp;#39; of &amp;#39;str&amp;#39; objects&amp;gt;,
 &amp;lt;method &amp;#39;capitalize&amp;#39; of &amp;#39;str&amp;#39; objects&amp;gt;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Accessing the function objects stored inside the list works like it would with any other type of object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; for f in funcs:
...     print(f, f(&amp;#39;hey there&amp;#39;))
&amp;lt;function yell at 0x10ff96510&amp;gt; &amp;#39;HEY THERE!&amp;#39;
&amp;lt;method &amp;#39;lower&amp;#39; of &amp;#39;str&amp;#39; objects&amp;gt; &amp;#39;hey there&amp;#39;
&amp;lt;method &amp;#39;capitalize&amp;#39; of &amp;#39;str&amp;#39; objects&amp;gt; &amp;#39;Hey there&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can even call a function object stored in the list without assigning it to a variable first. You can do the lookup and then immediately call the resulting “disembodied” function object within a single expression:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; funcs[0](&amp;#39;heyho&amp;#39;)
&amp;#39;HEYHO!&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
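&lt;p&gt;Lists aren’t special here. For instance (a made-up sketch), a dict mapping names to function objects gives you a simple dispatch table:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def yell(text):
    return text.upper() + &amp;#39;!&amp;#39;

def whisper(text):
    return text.lower() + &amp;#39;...&amp;#39;

# Hypothetical dispatch table: look up the behavior by name, then call it
handlers = {&amp;#39;loud&amp;#39;: yell, &amp;#39;quiet&amp;#39;: whisper}

&amp;gt;&amp;gt;&amp;gt; handlers[&amp;#39;loud&amp;#39;](&amp;#39;hello&amp;#39;)
&amp;#39;HELLO!&amp;#39;
&amp;gt;&amp;gt;&amp;gt; handlers[&amp;#39;quiet&amp;#39;](&amp;#39;HELLO&amp;#39;)
&amp;#39;hello...&amp;#39;
&lt;/code&gt;&lt;/pre&gt;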
&lt;h2&gt;Functions Can Be Passed To Other Functions&lt;/h2&gt;
&lt;p&gt;Because functions are objects you can pass them as arguments to other functions. Here’s a greet function that formats a greeting string using the function object passed to it and then prints it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def greet(func):
    greeting = func(&amp;#39;Hi, I am a Python program&amp;#39;)
    print(greeting)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can influence the resulting greeting by passing in different functions. Here’s what happens if you pass the yell function to greet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; greet(yell)
HI, I AM A PYTHON PROGRAM!
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Of course you could also define a new function to generate a different flavor of greeting. For example, the following whisper function might work better if you don’t want your Python programs to sound like Optimus Prime:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def whisper(text):
    return text.lower() + &amp;#39;...&amp;#39;

&amp;gt;&amp;gt;&amp;gt; greet(whisper)
hi, i am a python program...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The ability to pass function objects as arguments to other functions is powerful. It allows you to abstract away and pass around behavior in your programs. In this example, the greet function stays the same but you can influence its output by passing in different greeting behaviors.&lt;/p&gt;
&lt;p&gt;Functions that can accept other functions as arguments are also called higher-order functions. They are a necessity for the functional programming style.&lt;/p&gt;
&lt;p&gt;The classical example for higher-order functions in Python is the built-in map function. It takes a function and an iterable and calls the function on each element in the iterable, yielding the results as it goes along.&lt;/p&gt;
&lt;p&gt;Here’s how you might format a sequence of greetings all at once by mapping the yell function to them:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; list(map(yell, [&amp;#39;hello&amp;#39;, &amp;#39;hey&amp;#39;, &amp;#39;hi&amp;#39;]))
[&amp;#39;HELLO!&amp;#39;, &amp;#39;HEY!&amp;#39;, &amp;#39;HI!&amp;#39;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;map has gone through the entire list and applied the yell function to each element.&lt;/p&gt;
&lt;h2&gt;Functions Can Be Nested&lt;/h2&gt;
&lt;p&gt;Python allows functions to be defined inside other functions. These are often called nested functions or inner functions. Here’s an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def speak(text):
    def whisper(t):
        return t.lower() + &amp;#39;...&amp;#39;
    return whisper(text)

&amp;gt;&amp;gt;&amp;gt; speak(&amp;#39;Hello, World&amp;#39;)
&amp;#39;hello, world...&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, what’s going on here? Every time you call speak it defines a new inner function whisper and then calls it.&lt;/p&gt;
&lt;p&gt;And here’s the kicker—whisper does not exist outside speak:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; whisper(&amp;#39;Yo&amp;#39;)
NameError: &amp;quot;name &amp;#39;whisper&amp;#39; is not defined&amp;quot;

&amp;gt;&amp;gt;&amp;gt; speak.whisper
AttributeError: &amp;quot;&amp;#39;function&amp;#39; object has no attribute &amp;#39;whisper&amp;#39;&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But what if you really wanted to access that nested whisper function from outside speak? Well, functions are objects—you can return the inner function to the caller of the parent function.&lt;/p&gt;
&lt;p&gt;For example, here’s a function defining two inner functions. Depending on the argument passed to top-level function it selects and returns one of the inner functions to the caller:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def get_speak_func(volume):
    def whisper(text):
        return text.lower() + &amp;#39;...&amp;#39;
    def yell(text):
        return text.upper() + &amp;#39;!&amp;#39;
    if volume &amp;gt; 0.5:
        return yell
    else:
        return whisper
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice how get_speak_func doesn’t actually call one of its inner functions—it simply selects the appropriate function based on the volume argument and then returns the function object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; get_speak_func(0.3)
&amp;lt;function get_speak_func.&amp;lt;locals&amp;gt;.whisper at 0x10ae18&amp;gt;

&amp;gt;&amp;gt;&amp;gt; get_speak_func(0.7)
&amp;lt;function get_speak_func.&amp;lt;locals&amp;gt;.yell at 0x1008c8&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Of course you could then go on and call the returned function, either directly or by assigning it to a variable name first:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; speak_func = get_speak_func(0.7)
&amp;gt;&amp;gt;&amp;gt; speak_func(&amp;#39;Hello&amp;#39;)
&amp;#39;HELLO!&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let that sink in for a second here… This means not only can functions accept behaviors through arguments but they can also return behaviors. How cool is that?&lt;/p&gt;
&lt;p&gt;You know what, this is starting to get a little loopy here. I’m going to take a quick coffee break before I continue writing (and I suggest you do the same.)&lt;/p&gt;
&lt;h2&gt;Functions Can Capture Local State&lt;/h2&gt;
&lt;p&gt;You just saw how functions can contain inner functions and that it’s even possible to return these (otherwise hidden) inner functions from the parent function.&lt;/p&gt;
&lt;p&gt;Best put on your seat belts now because it’s going to get a little crazier still—we’re about to enter even deeper functional programming territory. (You had that coffee break, right?)&lt;/p&gt;
&lt;p&gt;Not only can functions return other functions, these inner functions can also capture and carry some of the parent function’s state with them.&lt;/p&gt;
&lt;p&gt;I’m going to slightly rewrite the previous get_speak_func example to illustrate this. The new version takes a “volume” and a “text” argument right away to make the returned function immediately callable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def get_speak_func(text, volume):
    def whisper():
        return text.lower() + &amp;#39;...&amp;#39;
    def yell():
        return text.upper() + &amp;#39;!&amp;#39;
    if volume &amp;gt; 0.5:
        return yell
    else:
        return whisper

&amp;gt;&amp;gt;&amp;gt; get_speak_func(&amp;#39;Hello, World&amp;#39;, 0.7)()
&amp;#39;HELLO, WORLD!&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Take a good look at the inner functions whisper and yell now. Notice how they no longer have a text parameter? But somehow they can still access the text parameter defined in the parent function. In fact, they seem to capture and “remember” the value of that argument.&lt;/p&gt;
&lt;p&gt;Functions that do this are called lexical closures (or just closures, for short). A closure remembers the values from its enclosing lexical scope even when the program flow is no longer in that scope.&lt;/p&gt;
&lt;p&gt;In practical terms this means not only can functions return behaviors but they can also pre-configure those behaviors. Here’s another bare-bones example to illustrate this idea:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def make_adder(n):
    def add(x):
        return x + n
    return add

&amp;gt;&amp;gt;&amp;gt; plus_3 = make_adder(3)
&amp;gt;&amp;gt;&amp;gt; plus_5 = make_adder(5)

&amp;gt;&amp;gt;&amp;gt; plus_3(4)
7
&amp;gt;&amp;gt;&amp;gt; plus_5(4)
9
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example make_adder serves as a factory to create and configure “adder” functions. Notice how the “adder” functions can still access the n argument of the make_adder function (the enclosing scope).&lt;/p&gt;
&lt;h2&gt;Objects Can Behave Like Functions&lt;/h2&gt;
&lt;p&gt;Objects aren’t functions in Python. But they can be made callable, which allows you to treat them like functions in many cases.&lt;/p&gt;
&lt;p&gt;If an object is callable it means you can use round parentheses () on it and pass function call arguments to it. Here’s an example of a callable object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class Adder:
    def __init__(self, n):
         self.n = n
    def __call__(self, x):
        return self.n + x

&amp;gt;&amp;gt;&amp;gt; plus_3 = Adder(3)
&amp;gt;&amp;gt;&amp;gt; plus_3(4)
7
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Behind the scenes, “calling” an object instance as a function attempts to execute the object’s &lt;code&gt;__call__&lt;/code&gt; method.&lt;/p&gt;
&lt;p&gt;Of course not all objects will be callable. That’s why there’s a built-in callable function to check whether an object appears callable or not:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; callable(plus_3)
True
&amp;gt;&amp;gt;&amp;gt; callable(yell)
True
&amp;gt;&amp;gt;&amp;gt; callable(False)
False
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;TLDR&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Everything in Python is an object, including functions. You can assign them to variables, store them in data structures, and pass or return them to and from other functions (first-class functions).&lt;/li&gt;
&lt;li&gt;First-class functions allow you to abstract away and pass around behavior in your programs.&lt;/li&gt;
&lt;li&gt;Functions can be nested and they can capture and carry some of the parent function’s state with them. Functions that do this are called closures.&lt;/li&gt;
&lt;li&gt;Objects can be made callable which allows you to treat them like functions in many cases.&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>Django 1.11 Release Note a Reading</title><link>https://rezhajul.io/posts/django-1-11-release-note-a-reading/</link><guid isPermaLink="true">https://rezhajul.io/posts/django-1-11-release-note-a-reading/</guid><description>Django 1.11 Release Note a Reading</description><pubDate>Wed, 05 Apr 2017 14:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Django 1.11 is released for the world to use. It comes with a lot of changes, which can take some time to read.&lt;/p&gt;
&lt;p&gt;In this video, GoDjango reads all of the release notes so you can just put on some headphones and hit play.&lt;/p&gt;
&lt;p&gt;{{ youtube(id=&amp;quot;O-aTMavulZU&amp;quot;) }}&lt;/p&gt;
</content:encoded></item><item><title>Skip Yaourt Prompts on Arch Linux</title><link>https://rezhajul.io/posts/skip-yaourt-prompts-on-arch-linux/</link><guid isPermaLink="true">https://rezhajul.io/posts/skip-yaourt-prompts-on-arch-linux/</guid><description>Skipping Pain in The Ass Yaourt Prompts</description><pubDate>Mon, 03 Apr 2017 19:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Yaourt is probably the best tool to automatically download and install packages from the Arch User Repository, also known as AUR. It’s really powerful; however, by default, it prompts you a LOT for confirmations of different things, such as checking if you want to install something, if you want to edit the PKGBUILD, etc. As a result, Yaourt is pretty annoying if you’re used to the hands-free nature of most other package managers.&lt;/p&gt;
&lt;p&gt;As it turns out, there is a file you can create called &lt;code&gt;~/.yaourtrc&lt;/code&gt; that can change the behavior of Yaourt.&lt;/p&gt;
&lt;p&gt;To turn off all of the prompts, type the following into a new file called &lt;code&gt;~/.yaourtrc&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;NOCONFIRM=1
BUILD_NOCONFIRM=1
EDITFILES=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first line will skip the messages confirming if you really want to install the package.&lt;/p&gt;
&lt;p&gt;The second line will skip the messages asking you if you want to continue the build.&lt;/p&gt;
&lt;p&gt;The third and last line will skip the messages asking if you want to edit the PKGBUILD files.&lt;/p&gt;
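&lt;p&gt;If you prefer to do this from the terminal, a quick heredoc writes the whole file in one go (this just creates the same three-line file described above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Overwrites any existing ~/.yaourtrc
cat &amp;gt; ~/.yaourtrc &amp;lt;&amp;lt; &amp;#39;EOF&amp;#39;
NOCONFIRM=1
BUILD_NOCONFIRM=1
EDITFILES=0
EOF
&lt;/code&gt;&lt;/pre&gt;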
&lt;p&gt;When you’re done doing this, Yaourt should now stop being a pain to use. Have fun with your hands-free installs!&lt;/p&gt;
</content:encoded></item><item><title>One Hell Named JSON</title><link>https://rezhajul.io/posts/one-hell-named-json/</link><guid isPermaLink="true">https://rezhajul.io/posts/one-hell-named-json/</guid><description>Json with a single quote</description><pubDate>Mon, 03 Apr 2017 15:33:17 GMT</pubDate><content:encoded>&lt;p&gt;Today my AWS Lambda Python function was failing with the error &lt;code&gt;ValueError: Expecting property name: line 1 column 2 (char 1)&lt;/code&gt;. This function receives a JSON event from AWS Kinesis and sends it back to Kafka. Apparently this is the error &lt;code&gt;json.loads&lt;/code&gt; raises when you feed it a JSON string that uses single quotes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&amp;gt;&amp;gt;&amp;gt; json_string = &amp;quot;{&amp;#39;name&amp;#39;: rezha, &amp;#39;ganteng&amp;#39;: true}&amp;quot;
&amp;gt;&amp;gt;&amp;gt; json.loads(json_string)
# ValueError: Expecting property name: line 1 column 2 (char 1)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To fix this, you could use &lt;code&gt;ast.literal_eval&lt;/code&gt;. &lt;code&gt;ast.literal_eval&lt;/code&gt; safely evaluates an expression node or a string containing a Python literal or container display. The string or node provided may only consist of the following Python literal structures: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None. &lt;code&gt;ast.literal_eval&lt;/code&gt; raises an exception if the input isn&amp;#39;t a valid Python datatype, so the code won&amp;#39;t be executed if it&amp;#39;s not.&lt;/p&gt;
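&lt;p&gt;As a minimal sketch, here is that idea applied to a single-quoted payload. Note that &lt;code&gt;ast.literal_eval&lt;/code&gt; parses Python literals, so the values themselves must also be valid Python (quoted strings, capitalized &lt;code&gt;True&lt;/code&gt;), which is slightly stricter than the broken example string above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import ast

# Single quotes break json.loads, but this is a valid Python dict literal.
payload = &amp;quot;{&amp;#39;name&amp;#39;: &amp;#39;rezha&amp;#39;, &amp;#39;ganteng&amp;#39;: True}&amp;quot;

data = ast.literal_eval(payload)
print(data[&amp;#39;name&amp;#39;])     # rezha
print(data[&amp;#39;ganteng&amp;#39;])  # True
&lt;/code&gt;&lt;/pre&gt;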
&lt;p&gt;Use &lt;code&gt;ast.literal_eval&lt;/code&gt; whenever you would otherwise reach for &lt;code&gt;eval&lt;/code&gt;; you shouldn&amp;#39;t usually need to evaluate arbitrary Python statements.&lt;/p&gt;
</content:encoded></item><item><title>Using Google URL Shortener with Your Domain! For Free!!</title><link>https://rezhajul.io/posts/using-google-url-shortener-with-your-domain-for-free/</link><guid isPermaLink="true">https://rezhajul.io/posts/using-google-url-shortener-with-your-domain-for-free/</guid><description>Using Google URL Shortener with Your Domain! For Free!!</description><pubDate>Sat, 18 Mar 2017 17:33:17 GMT</pubDate><content:encoded>&lt;p&gt;I sometimes feel like URL shorteners are some of the most understated tools in internet marketing, and there have been more than a few times that I wished I’d had someone share some advice on URL shorteners.&lt;/p&gt;
&lt;p&gt;For instance:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What do you do with really long links?&lt;/li&gt;
&lt;li&gt;What if you want to track the results?&lt;/li&gt;
&lt;li&gt;What if the link—long and unwieldy—upstages the content?&lt;/li&gt;
&lt;li&gt;What if I am on a platform where every character counts (on Twitter, for instance)?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What URL Shorteners Do (And Don’t Do)&lt;/h2&gt;
&lt;p&gt;URL shorteners were originally created to address stubborn email systems that wrapped an email after 80 characters and broke any long URLs that might have been in the message. Once Twitter (and other social media) took off and introduced the 140-character limit, that shortened link became even more important.&lt;/p&gt;
&lt;p&gt;It wasn’t long before link shorteners quickly became more than mere URL shorteners.&lt;/p&gt;
&lt;p&gt;They began to allow publishers to track the links they posted with analytics. They keep URLs that are loaded with UTM tracking tags from looking ugly by hiding the length and characters in the UTM tracking system.&lt;/p&gt;
&lt;p&gt;Sounds great, right? Who wouldn’t use this system?&lt;/p&gt;
&lt;h2&gt;Use A Custom Domain With URL Shorteners&lt;/h2&gt;
&lt;p&gt;With services like Domainr and IWantMyName, you can easily get a custom domain to use with link shorteners.&lt;/p&gt;
&lt;p&gt;Why would you want to do this?&lt;/p&gt;
&lt;p&gt;Using a custom domain with your link shortening service is a way to confront the spam and distrust issue.&lt;/p&gt;
&lt;p&gt;Your links become branded as yours. Your brand, your name–it’s carried across into the very links that you are sharing. This helps let people know they aren’t spam. As long as your custom domain relates to your brand and you use it consistently, people will know that the links you are sharing have been vetted by you.&lt;/p&gt;
&lt;p&gt;Also, a custom domain might improve the number of clicks your links receive. According to RadiumOne, URL shorteners that offer vanity domains can increase sharing by up to 25 percent.&lt;/p&gt;
&lt;h2&gt;What do I need?&lt;/h2&gt;
&lt;p&gt;This is the best part: you only need a couple of things, which in this day and age you probably already have access to.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A domain name. I’m going to be using go.rezhajulio.id. This domain should not be used for anything else.&lt;/li&gt;
&lt;li&gt;Hosting space for the domain. This can be Apache or Nginx driven; I’ll provide examples for both. You just need to be able to make a change to the vhost configuration. For Apache we can do this easily via the .htaccess file. Nginx people will probably be running a VPS or similar, so I’ll assume you have access to create a new vhost configuration.&lt;/li&gt;
&lt;li&gt;A Google account, if you want the URLs you generate with goo.gl to be unique to your account and kept in your dashboard to track the number of clicks, etc. Otherwise anonymous access will work just fine to generate the URL.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;I’m ready to go, what’s next?&lt;/h2&gt;
&lt;p&gt;With everything ready to go, we can make the magic happen!&lt;/p&gt;
&lt;h3&gt;Apache configuration&lt;/h3&gt;
&lt;p&gt;Apache people: create a .htaccess file in the root web directory and add the following lines to it. Of course, if you have access to the main vhost configuration, then use that instead to save Apache the leg work of reading in the .htaccess file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;RewriteEngine On
RewriteRule ^(.*)$ http://goo.gl/$1 [L,R=301]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The code above is very simple. This rewrite rule simply takes the requested URL and swaps out your domain for the goo.gl domain and marks it as a permanent (301) redirect.&lt;/p&gt;
&lt;h3&gt;Nginx Configuration&lt;/h3&gt;
&lt;p&gt;Nginx people: you need to create a server block. I’ve always followed the sites-available/sites-enabled pattern used by Nginx on Ubuntu, as I find this to be the most organised method, but do it however you’ve been working; as long as Nginx can read this server block, it will answer any calls to the domain.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;server {
  server_name go.rezhajulio.id;
  rewrite ^ http://goo.gl$request_uri permanent;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The server block does the same as the Apache rule above: it redirects any requests on to goo.gl. Simple.&lt;/p&gt;
&lt;h2&gt;How do I use this to make my own short urls?&lt;/h2&gt;
&lt;p&gt;By now you’ve probably figured it out, but just in case: visit goo.gl and sign in if you want to keep statistics on your links; otherwise you will just see the input box labelled “Paste your long URL here”. Follow the instructions, paste in your long URL, and shorten that URL!&lt;/p&gt;
&lt;p&gt;In return you’ll get a short URL in the form of &lt;a href=&quot;http://goo.gl/ELenCw&quot;&gt;http://goo.gl/ELenCw&lt;/a&gt;; all we are interested in is the bit after the domain. We can then append that to our custom domain, like &lt;a href=&quot;https://go.rezhajulio.id/ELenCw&quot;&gt;https://go.rezhajulio.id/ELenCw&lt;/a&gt;, and you’re done!&lt;/p&gt;
&lt;p&gt;Now when you use that URL, it will first take a trip to your server, where it will find the rewrite rules we set up; these will then send the request on to goo.gl to be translated into the long URL and the correct page.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Seriously is that it!?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Yup, that’s it. This method ensures you can still use the analytical data collected by the goo.gl service in their pretty graphs and you get to use your own domain.&lt;/p&gt;
</content:encoded></item><item><title>Lightning Intro To Go</title><link>https://rezhajul.io/posts/lightning-intro-to-go/</link><guid isPermaLink="true">https://rezhajul.io/posts/lightning-intro-to-go/</guid><description>Lightning Intro To Go</description><pubDate>Sun, 22 Jan 2017 02:21:20 GMT</pubDate><content:encoded>&lt;h3&gt;The Go Programming Language&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Go is a compiled programming language in the tradition of C and C++, with static typing, garbage collection, and unique language features enabling concurrent programming&lt;/li&gt;
&lt;li&gt;Latest Release: &lt;a href=&quot;https://golang.org/dl&quot;&gt;1.8rc3&lt;/a&gt; (1.8 will be out soon)&lt;/li&gt;
&lt;li&gt;Developed internally by Google to solve the kind of problems unique to Google (ie, high scale services/systems)&lt;/li&gt;
&lt;li&gt;Designers/developers of Go have deep ties to C/Unix (Ken Thompson, Rob Pike, Robert Griesemer, et al)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hello World in Go&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main
import &amp;quot;fmt&amp;quot;

func main() {
    fmt.Println(&amp;quot;hello world&amp;quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Go Language and Runtime Feature Overview&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Small and powerful standard library&lt;/li&gt;
&lt;li&gt;Garbage collected&lt;/li&gt;
&lt;li&gt;Statically compile (or cross-compile!) and deploy almost anywhere&lt;/li&gt;
&lt;li&gt;Super-fast compiles and single binary deploys&lt;/li&gt;
&lt;li&gt;Language/standard library are UTF-8 native&lt;/li&gt;
&lt;li&gt;Design and behavior of language/standard library is opinionated&lt;/li&gt;
&lt;li&gt;Since v1.5, compiler toolchain is written in Go&lt;/li&gt;
&lt;li&gt;Built in unit testing&lt;/li&gt;
&lt;li&gt;Easy integration with C code/libraries&lt;/li&gt;
&lt;li&gt;Less-is-more!&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Installing Go&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Download binaries from &lt;a href=&quot;https://golang.org/dl&quot;&gt;golang.org/dl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Or, install from source:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash

sudo -s
cd /usr/local/

export GOROOT_BOOTSTRAP=/usr/local/go-1.7.4

git clone https://go.googlesource.com/go
cd go/src &amp;amp;&amp;amp; git checkout go1.8rc3
./all.bash

export PATH=$PATH:/usr/local/go/bin
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;IDEs&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;VSCode - a new but strong cross-platform IDE built by Microsoft that works well with Go!&lt;/li&gt;
&lt;li&gt;JetBrains Gogland - new, upcoming JetBrains IDE for Go&lt;/li&gt;
&lt;li&gt;Plugins available for most other IDEs/editors -- Sublime, IntelliJ, etc&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Installing VS Code with Go&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://code.visualstudio.com/&quot;&gt;Download Installer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Open a Go file&lt;/li&gt;
&lt;li&gt;Install the recommended Go extension&lt;/li&gt;
&lt;li&gt;Write code!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/2017/01/vscode0.png&quot; alt=&quot;&quot;&gt;
&lt;img src=&quot;../../assets/images/2017/01/vscode1.png&quot; alt=&quot;&quot;&gt;
&lt;img src=&quot;../../assets/images/2017/01/vscode2.png&quot; alt=&quot;&quot;&gt;
&lt;img src=&quot;../../assets/images/2017/01/vscode3.png&quot; alt=&quot;&quot;&gt;
&lt;img src=&quot;../../assets/images/2017/01/vscode4.png&quot; alt=&quot;&quot;&gt;
&lt;img src=&quot;../../assets/images/2017/01/vscode5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Run/Build/Install command line example:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run:
&lt;code&gt;$ go run hello.go&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build and Execute:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ go build hello.go
$ ls
hello hello.go
$ ./hello
hello world
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install (puts hello in &lt;code&gt;$GOPATH/bin/&lt;/code&gt;):
&lt;code&gt;$ go install&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Packages and go get&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go code is grouped in &amp;quot;packages&amp;quot;: a directory containing one or more .go files&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Packages are retrievable via go get:
&lt;code&gt;$ go get -u github.com/knq/baseconv&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The above will fetch the package&amp;#39;s Git repository and store it in $GOPATH/src/$REPO:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cd $GOPATH/src/github.com/knq/baseconv
$ ls
baseconv.go  baseconv_test.go  coverage.out  coverage.sh  example  LICENSE  old  README.md
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A package may have any number of sub directories each of which is its own package, ie:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;github.com/knq/baseconv              // would import &amp;quot;baseconv&amp;quot;
github.com/knq/baseconv/subpackage   // would import &amp;quot;subpackage&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Package Imports, and Visibility&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Packages (ie, libraries) can be imported into the current package:
&lt;code&gt;import  &amp;quot;github.com/knq/baseconv&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Only func&amp;#39;s and type&amp;#39;s defined in that package beginning with a capital letter are visible when imported:
&lt;code&gt;func doSomething() {} // not visible to other packages&lt;/code&gt;
&lt;code&gt;func DoSomething() {} // visible to other packages&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;import (
    &amp;quot;fmt&amp;quot;
)
fmt.Println(&amp;quot;foobar&amp;quot;)
fmt.print(&amp;quot;hello&amp;quot;) // compiler error
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When in doubt, start the name with a capital&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can define an import alias for a package:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;import (
    j &amp;quot;github.com/knq/baseconv&amp;quot;
)
// baseconv&amp;#39;s exported funcs/types are now available under &amp;#39;j&amp;#39;:
j.Decode62()
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some packages need to be imported for their side-effect:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;import (
    // imports the postgres database driver package
    _ &amp;quot;github.com/lib/pq&amp;quot;
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Building, Testing, and Installing a Go package from command line&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cd $GOPATH/src/github.com/knq/baseconv/
$ go build
$ go test -v
=== RUN   TestErrors
--- PASS: TestErrors (0.00s)
=== RUN   TestConvert
--- PASS: TestConvert (0.00s)
=== RUN   TestEncodeDecode
--- PASS: TestEncodeDecode (0.00s)
PASS
ok      github.com/knq/baseconv 0.002s
$ go install
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Some Notes on Go&amp;#39;s Syntax/Design&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Go designers have purposefully omitted many common features in other languages in the interest of simplicity and readability above almost all else&lt;/li&gt;
&lt;li&gt;If it can already be done through some other feature available to the language, then there is not a need for a specific language feature&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Quick Syntax Primer&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Go is C-like, but:&lt;/li&gt;
&lt;li&gt;No semicolons -- every line break implies a semicolon&lt;/li&gt;
&lt;li&gt;Variable names come before the type&lt;/li&gt;
&lt;li&gt;Braces are required for control statements (for, if, switch, ...)&lt;/li&gt;
&lt;li&gt;Parentheses are not used in control statements&lt;/li&gt;
&lt;li&gt;Typing is implicit in assignments&lt;/li&gt;
&lt;li&gt;Unused import or variable is a compiler error&lt;/li&gt;
&lt;li&gt;Trailing commas are required in multi-line literals and argument lists&lt;/li&gt;
&lt;li&gt;Standard syntax formatting, applied automatically with gofmt&lt;/li&gt;
&lt;/ul&gt;
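&lt;p&gt;As a small sketch pulling a few of these rules together (implicit typing, required braces, no parentheses, and a required trailing comma), the following compiles as-is; removing the trailing comma after the 2, or adding an unused import, would be a compile error:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main

import &amp;quot;fmt&amp;quot; // an unused import would be a compile error

func main() {
    nums := []int{ // typing is implicit: nums is []int
        1,
        2, // trailing comma is required before a closing brace on its own line
    }
    if len(nums) &amp;gt; 1 { // braces required, no parentheses around the condition
        fmt.Println(nums)
    }
}
&lt;/code&gt;&lt;/pre&gt;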
&lt;h3&gt;C vs Go, a simple comparison&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Example of printing all command line arguments in C and Go:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-c&quot;&gt;#include &amp;lt;stdio.h&amp;gt;

int main(int argc, char **argv) {
    for (int i = 0; i &amp;lt; argc; i++) {
        printf(&amp;quot;&amp;gt;&amp;gt; arg %d: %s\n&amp;quot;, i, argv[i]);
    }
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main

import (
	&amp;quot;fmt&amp;quot;
	&amp;quot;os&amp;quot;
)

func main() {
	for i, a := range os.Args {
		fmt.Printf(&amp;quot;&amp;gt;&amp;gt; arg %d: %s\n&amp;quot;, i, a)
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Standard Types (builtin)&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;var b bool = false
var s string = &amp;quot;&amp;quot;
var c byte = 0 // alias for uint8

// -1 0
int   uint
int8  uint8
int16 uint16
int32 uint32
int64 uint64

// 0.0  1.0i ...
float32 complex64
float64 complex128

// &amp;#39;c&amp;#39;
rune // alias for int32

uintptr // an integer type that is large enough to hold the bit pattern of any pointer.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Expressions and Assignments&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Variable names can include any UTF-8 code point:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;var a = &amp;quot;&amp;quot;                  // variable &amp;quot;a&amp;quot;    is string    value &amp;quot;&amp;quot;
var 世界 = 15                // variable &amp;quot;世界&amp;quot; is int       value 15
var f bool                  // variable &amp;quot;f&amp;quot;    is bool      value false
b := &amp;quot;awesome&amp;quot;              // variable &amp;quot;b&amp;quot;    is string    value &amp;quot;awesome&amp;quot;
b = &amp;quot;different string&amp;quot;      // variable &amp;quot;b&amp;quot;    is assigned  value &amp;quot;different string&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expressions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;a := b * c // a is assigned the value of b * c
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports multiple return values from functions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func someFunc() (int, int) { return 7, 10 }
a, b := someFunc() // a = 7, b = 10
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Special typeless variable _ can be used as placeholder in an assignment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;a, b, _ = anotherFunc() // the third return value of anotherFunc will be ignored
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Expressions and Operators&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Usual operators:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/2017/01/operator.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note: operators are only available as a single expression (cannot be inlined), ie:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;// valid
i++
j--

// not valid
j[i++]
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Otherwise, operators work as expected:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;j *= 10
i = i + 15
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Constants&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go declares constants using the keyword &amp;quot;const&amp;quot;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;const (
    MyString string = &amp;quot;foobar&amp;quot;
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A const can be any expression:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;const (
    // typed const
    MyConst string = &amp;quot;hello&amp;quot;

    // not typed
    MyOtherConst = 0
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;iota is special value for incrementing constants:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;const (
    MyConstA = iota // 0
    MyConstB        // 1
    MyConstC        // 2
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Slices, Maps, Arrays&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;There are fixed-length arrays, but rarely used:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;var a [8]byte
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A slice provides a dynamic list of any type:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;var a = []int{15, 20, 9}
for i := range a {
    fmt.Printf(&amp;quot;&amp;gt;&amp;gt; %d\n&amp;quot;, a[i])
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maps (dictionaries/hashes in other languages) provide a robust map of key to value pairs for any type:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;var a = map[string]int{
    &amp;quot;foo&amp;quot;: 10,
    &amp;quot;bar&amp;quot;: 15,
}
for k, v := range a {
    fmt.Printf(&amp;quot;&amp;gt;&amp;gt; %s: %d\n&amp;quot;, k, v)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;make and new&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;make is used to allocate either a slice, map or channel with a size:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;a := make([]string, 15)              // a has type &amp;#39;[]string&amp;#39; and initial length of 15
b := make(map[string]interface{}, 0) // b has type &amp;#39;map[string]interface{}&amp;#39;
c := make(chan *Point)               // c has type &amp;#39;chan *Point&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;new allocates a new instance of the type and returns a pointer to the allocated instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;b := new(Point) // b has type &amp;#39;*Point&amp;#39;
p := &amp;amp;Point{}   // more or less the same as new(Point)
i := new(int)   // i has type &amp;#39;*int&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;append, len, and reslicing&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;append is used to append to a slice&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;b := []string{&amp;quot;foo&amp;quot;, &amp;quot;bar&amp;quot;}
b = append(b, &amp;quot;another string&amp;quot;) // b is now []string{&amp;quot;foo&amp;quot;, &amp;quot;bar&amp;quot;, &amp;quot;another string&amp;quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;len provides the length for slices, maps, or strings:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;a := map[string]int{0, 12}
b := []int{14, 13, 3}
len(a)       // 2
len(b)       // 3
len(&amp;quot;hello&amp;quot;) // 5
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any slice or array can be resliced:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;a := []string{&amp;quot;foo&amp;quot;, &amp;quot;bar&amp;quot;, &amp;quot;make&amp;quot;, &amp;quot;new&amp;quot;}
b := a[:1]  // slice a from 0 to 1      -- b is []string{&amp;quot;foo&amp;quot;}
c := a[1:3] // slice a from 1 to 3      -- c is []string{&amp;quot;bar&amp;quot;, &amp;quot;make&amp;quot;}
d := a[1:]  // slice a from 1 to len(a) -- d is []string{&amp;quot;bar&amp;quot;, &amp;quot;make&amp;quot;, &amp;quot;new&amp;quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;func&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Functions are declared with func, and the return type follows the parameter list:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func empty() {}                                       // no return value
func doNothing(a string, c int) error { return nil }  // returns error
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A func can be assigned to a variable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func someFuncName() error { return nil }
a := someFuncName // a has type &amp;#39;func() error&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A func can also be declared inline:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func main() {
    g := func() {
        doSomething()
    }
    g()
    func(b int) {
      fmt.Printf(&amp;quot;%d\n&amp;quot;, b)
    }(10)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Control Statements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;if/else if/else:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;if cond {
    expr
} else if cond {
    expr
} else {
    expr
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;switch/case/default:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;switch v {
case &amp;quot;e&amp;quot;:
    // something
default:
    // default
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;switch does not require break statements and cases do not automatically fallthrough&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
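&lt;p&gt;To make the no-fallthrough behavior concrete, here is a small sketch: the &lt;code&gt;fallthrough&lt;/code&gt; keyword must be written explicitly to run the next case&amp;#39;s body:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main

import &amp;quot;fmt&amp;quot;

func main() {
    switch v := 1; v {
    case 1:
        fmt.Println(&amp;quot;one&amp;quot;)
        fallthrough // explicitly continue into the next case
    case 2:
        fmt.Println(&amp;quot;two&amp;quot;) // runs because of the fallthrough above
    case 3:
        fmt.Println(&amp;quot;three&amp;quot;) // never reached: no implicit fallthrough
    }
}
&lt;/code&gt;&lt;/pre&gt;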
&lt;h3&gt;Control Statements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;switch as replacement for complex if/else chains:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;switch {
case i == 0 &amp;amp;&amp;amp; err != nil:
    // something
case i == 6:
    // something
case j == 9:
    // something
default:
    // default
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;select is like switch, but waits on a channel:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;select {
case a := &amp;lt;-c:
    // read a from channel c
case &amp;lt;-time.After(15*time.Second):
    // a &amp;#39;timeout&amp;#39; after 15 seconds
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Control Statements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Go, &amp;quot;while&amp;quot; is spelled &amp;quot;for&amp;quot; -- the only provided loop is for:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;for cond {
}

for {
    if !cond {
        break
    }
}

loop:
    for i := 0; i &amp;lt; len(someSlice); i++ {
        for {
            if a == 15 {
                break loop
            }
        }
    }

for i, value := range someSlice {
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Variadic parameters&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;func can have variable args (variadic)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Must be last parameter&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Special symbol ... to indicate expansion:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func doSomething(prefix string, intList ...int) {
    for m, n := range intList {
        fmt.Printf(&amp;quot;&amp;gt; %s (%d): %d\n&amp;quot;, prefix, m, n)
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can be used also in append statements:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;strList := []string{&amp;quot;bar&amp;quot;}
j := append([]string{&amp;quot;foo&amp;quot;}, strList...) // j is []string{&amp;quot;foo&amp;quot;, &amp;quot;bar&amp;quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
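The two uses of ... above (declaring a variadic parameter and expanding a slice into one) can be sketched in a small runnable program; the `sum` helper below is illustrative, not from the original notes:

```go
package main

import "fmt"

// sum accepts any number of ints; inside the func, nums is a []int
func sum(nums ...int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

func main() {
	fmt.Println(sum(1, 2, 3)) // 6
	vals := []int{4, 5, 6}
	fmt.Println(sum(vals...)) // 15 -- a slice expanded into variadic args
	fmt.Println(sum())        // 0 -- zero arguments is fine
}
```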
&lt;h3&gt;Type Declaration&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No classes or objects&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;struct provides compound (&amp;quot;structured&amp;quot;) types:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type Point struct {
    X, Y float64
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and interface defines a set of func&amp;#39;s:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type Reader interface {
    Read([]byte) (int, error)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can define a new named type based on any other type:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type MyUnsignedInteger uint32
type MyPoint Point
type MyReader Reader
type MyFunc func(string) error
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Notes on Types&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Same export / visibility rules apply for struct members:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type Point struct {
    X, Y float64
    j    int
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Only Go code in the same package as Point can see Point.j&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type conversions (casts) are always explicit!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type MyStr string
a := MyStr(&amp;quot;blah&amp;quot;)       // a is of type MyStr and has value &amp;quot;blah&amp;quot;
var b string = a         // compiler error
var c string = string(a) // c is of type string and has value &amp;quot;blah&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Type Receivers&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A func can be given a receiver:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type MyType struct {
    MyValue int
}
func (mt MyType) AddOne() int {
    return mt.MyValue+1
}

type MyString string
func (ms MyString) String() string {
    return string(ms)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A func with a pointer receiver can modify the value of the receiver:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;// Increment increments MyType&amp;#39;s MyValue and returns the result.
func (mt *MyType) Increment() int {
    mt.MyValue++
    return mt.MyValue
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
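The difference between value and pointer receivers can be seen side by side; the `Counter` type here is a made-up example, not from the notes above:

```go
package main

import "fmt"

type Counter struct {
	N int
}

// IncByValue gets a copy of the receiver, so the caller's Counter is unchanged.
func (c Counter) IncByValue() {
	c.N++
}

// IncByPointer gets a pointer, so it mutates the caller's Counter.
func (c *Counter) IncByPointer() {
	c.N++
}

func main() {
	c := Counter{}
	c.IncByValue()
	fmt.Println(c.N) // 0 -- only the copy was incremented
	c.IncByPointer() // Go inserts &c automatically for a pointer receiver
	fmt.Println(c.N) // 1
}
```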
&lt;h3&gt;About interface&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unlike Java or other object-oriented languages, there is no need to explicitly declare that a type implements an interface -- providing the methods is enough:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type Reader interface {
    Read([]byte) (int, error)
}

type MyReader string

// Read satisfies the Reader interface.
func (mr MyReader) Read([]byte) (int, error) { /* ... */ }

// DoSomething does something with a Reader.
func DoSomething(r Reader) { /* ... */ }

func main() {
    s := MyReader(&amp;quot;hello&amp;quot;)
    DoSomething(s)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Pointers, Interfaces, and nil&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go pointers are similar to pointers in C/C++ (the address of a variable), but there is no pointer math in Go!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The . operator is used for both pointer dereference and for accessing a member variable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type Point struct { X, Y int }

a := Point{10, 20}        // a has type &amp;#39;Point&amp;#39; with value {X: 10, Y: 20}
b := &amp;amp;Point{X: 30, Y: 40} // b has type &amp;#39;*Point&amp;#39; and **points** to value {X: 30, Y: 40}
*b = Point{Y: 80}         // b now points to value {X: 0, Y: 80}

// . is used to access struct members for both a and b:
fmt.Printf(&amp;quot;(%d %d) (%d %d)&amp;quot;, a.X, a.Y, b.X, b.Y) // prints &amp;quot;(10 20) (0 80)&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any pointer or interface value can be assigned nil:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type Reader interface{}
var a *Point = nil
var b Reader = nil
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
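One subtlety worth knowing here: an interface holding a nil pointer is not itself nil, because the interface still carries type information. A minimal sketch (the `MyError`/`mayFail` names are illustrative):

```go
package main

import "fmt"

type MyError struct{}

func (e *MyError) Error() string { return "my error" }

// mayFail returns a nil *MyError wrapped in a non-nil error interface value.
func mayFail() error {
	var e *MyError // e is nil
	return e       // the interface now holds (type *MyError, value nil)
}

func main() {
	err := mayFail()
	fmt.Println(err == nil) // false -- interface comparison checks type and value
}
```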
&lt;h3&gt;Goroutines and Channels&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Killer features of Go that provide lightweight concurrency in any Go application&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any func in Go can be a goroutine:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func main() {
    for i := 0; i &amp;lt; 10; i++ {
        go func(z int) {
            fmt.Printf(&amp;quot;&amp;gt;&amp;gt; %d\n&amp;quot;, z)
        }(i)
    }
    time.Sleep(1*time.Second)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Channels are a unique feature in Go that provide type-safe memory sharing&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;c := make(chan int, 1) // buffered; a send on an unbuffered chan blocks until a receiver is ready
c &amp;lt;- 10  // write 10 to c
j := &amp;lt;-c // read int from c

// channels can be declared receive-only or send-only:
var r &amp;lt;-chan int // receive-only chan
var w chan&amp;lt;- int // send-only chan
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Goroutine and Channel example&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main

import (
    &amp;quot;fmt&amp;quot;
    &amp;quot;sync&amp;quot;
    &amp;quot;time&amp;quot;
)

func main() {
    var wg sync.WaitGroup

    c := make(chan int)
    for i := 0; i &amp;lt; 10; i++ {
        wg.Add(1)
        go func(z int) {
            defer wg.Done()
            time.Sleep(1 * time.Second)
            c &amp;lt;- z
        }(i)
    }

    wg.Add(1)
    go func() {
        defer wg.Done()
        for {
            select {
            case z := &amp;lt;-c:
                fmt.Printf(&amp;quot;&amp;gt;&amp;gt; z: %d\n&amp;quot;, z)
            case &amp;lt;-time.After(5 * time.Second):
                return
            }
        }
    }()

    wg.Wait()
    fmt.Println(&amp;quot;done&amp;quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Handling Errors&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No try/catch equivalent&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Breaking flow should be done by checking for an error:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func MyFunc() error {
    return errors.New(&amp;quot;error encountered&amp;quot;)
}

err := MyFunc()
if err != nil {
    // handle error
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Utility func in the standard library fmt package:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;fmt.Errorf(&amp;quot;encountered: %d at %s&amp;quot;, line, str) // returns an error
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
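Putting the error-checking pattern and fmt.Errorf together in a runnable sketch (the `parseLine` helper is a hypothetical example, not from the notes):

```go
package main

import "fmt"

// parseLine is a hypothetical helper; fmt.Errorf builds its error value.
func parseLine(line int, s string) error {
	if s == "" {
		return fmt.Errorf("empty input at line %d", line)
	}
	return nil
}

func main() {
	if err := parseLine(3, ""); err != nil {
		fmt.Println(err) // empty input at line 3
	}
	fmt.Println(parseLine(1, "ok") == nil) // true
}
```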
&lt;h3&gt;Error Types&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;error is a built-in Go interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type error interface {
    Error() string
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any type can be an error by satisfying the error interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;type MyError struct {}
func (me *MyError) Error() string {
    return &amp;quot;my error&amp;quot;
}

func doSomething() error {
    return &amp;amp;MyError{}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;defer&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;defer is a great feature of Go that schedules a func call to run when the surrounding func returns:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func doSomething() error {
    db, err := sql.Open(/* ... */)
    if err != nil {
        return err
    }
    defer db.Close()

    _, err = db.Exec(&amp;quot;DELETE ...&amp;quot;)
    /* ... */
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
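Deferred calls run in last-in-first-out order when the surrounding func returns. A small sketch that records that order through a named return value (the `run` helper is illustrative):

```go
package main

import "fmt"

// run records the order its deferred calls execute in, using a named
// return value so the deferred closures can still modify it.
func run() (order []int) {
	for i := 0; i < 3; i++ {
		i := i // capture the loop variable for the closure
		defer func() { order = append(order, i) }()
	}
	return // the deferred calls now run in LIFO order: 2, 1, 0
}

func main() {
	fmt.Println(run()) // [2 1 0]
}
```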
&lt;h3&gt;panic and recover&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;panic allows immediate halt of the current goroutine:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;panic(&amp;quot;some error&amp;quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;recover() can only be called in defer&amp;#39;d func&amp;#39;s, but allows recovery after a panic:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;func myFunc() {
    defer func() {
        if e := recover(); e != nil {
            log.Printf(&amp;quot;run time panic: %v&amp;quot;, e)
        }
    }()

    panic(&amp;quot;my panic&amp;quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note: panics should not be used unless absolutely necessary.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Quick Overview of the Standard Library&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;import (
    &amp;quot;fmt&amp;quot;      // string formatting
    &amp;quot;strings&amp;quot;  // string manipulation
    &amp;quot;strconv&amp;quot;  // string conversion to standard types
    &amp;quot;io&amp;quot;       // system input/output package
    &amp;quot;sync&amp;quot;     // synchronization primitives
    &amp;quot;time&amp;quot;     // robust time handling/formatting
    &amp;quot;net/http&amp;quot; // http package supporting http servers and clients

    &amp;quot;database/sql&amp;quot;  // standardized sql interface
    &amp;quot;encoding/json&amp;quot; // json encoding/decoding (also packages for xml, csv, etc)

    // template libraries for text and html
    &amp;quot;text/template&amp;quot;
    &amp;quot;html/template&amp;quot;

    // cryptographic libs
    &amp;quot;crypto/rsa&amp;quot;
    &amp;quot;crypto/elliptic&amp;quot;

    &amp;quot;reflect&amp;quot;       // go runtime introspection / reflection
    &amp;quot;regexp&amp;quot;        // regular expressions
)
// And many many many more!
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;What Go doesn&amp;#39;t have (and why this is a good thing)&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Generics (later added in Go 1.18)&lt;/li&gt;
&lt;li&gt;Implicit comparisons&lt;/li&gt;
&lt;li&gt;Overloading / Inheritance&lt;/li&gt;
&lt;li&gt;Objects&lt;/li&gt;
&lt;li&gt;Ternary operator (?:)&lt;/li&gt;
&lt;li&gt;Miscellaneous data structures (vector, set, etc)&lt;/li&gt;
&lt;li&gt;Package manager (Go modules later filled this gap)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Working Example&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main

import (
	&amp;quot;encoding/json&amp;quot;
	&amp;quot;fmt&amp;quot;
	&amp;quot;io/ioutil&amp;quot;
	&amp;quot;net/http&amp;quot;
	&amp;quot;os&amp;quot;
)

func main() {
	// read from web
	res, err := http.Get(&amp;quot;http://ifconfig.co/json&amp;quot;)
	if err != nil {
		fmt.Fprintf(os.Stderr, &amp;quot;error: %v&amp;quot;, err)
		os.Exit(1)
	}
	defer res.Body.Close()

	// read the body
	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		fmt.Fprintf(os.Stderr, &amp;quot;error: %v&amp;quot;, err)
		os.Exit(1)
	}

	// decode json
	var vals map[string]interface{}
	err = json.Unmarshal(body, &amp;amp;vals)
	if err != nil {
		fmt.Fprintf(os.Stderr, &amp;quot;error: %v&amp;quot;, err)
		os.Exit(1)
	}

	for k, v := range vals {
		fmt.Fprintf(os.Stdout, &amp;quot;%s: %v\n&amp;quot;, k, v)
	}
}
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Do Schemaless Databases Really Exist?</title><link>https://rezhajul.io/posts/do-schemaless-databases-really-exist/</link><guid isPermaLink="true">https://rezhajul.io/posts/do-schemaless-databases-really-exist/</guid><description>There’s no such thing as a schemaless database</description><pubDate>Sun, 15 Jan 2017 17:33:17 GMT</pubDate><content:encoded>&lt;p&gt;There’s no such thing as a schemaless database. I know, lots of people want a schemaless database, and lots of companies are promoting their products as schemaless DBMSs. And schemaless DBMSs exist. But &lt;em&gt;schemaless databases&lt;/em&gt; are mythical beasts because there is always a schema somewhere. Usually in multiple places, which I will later claim is what causes grief.&lt;/p&gt;
&lt;h3&gt;There Is Always A Schema&lt;/h3&gt;
&lt;p&gt;We should define “schema” first. It comes from Greek roots, meaning “form, figure” according to my dictionary. Wikipedia says, roughly,&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A database schema is its structure; a set of integrity constraints imposed on a database. These integrity constraints ensure compatibility between parts of the schema.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In other words, a schema expresses expectations about what fields exist in a database, and what their types will be. It also enforces those expectations, at least to some extent (there’s usually some flexibility).&lt;/p&gt;
&lt;p&gt;My claim is that there’s always a schema, because somewhere, something has expectations about what’s in a database. At least, any useful, practical, real database. The DBMS itself may not have such expectations, but something else does.&lt;/p&gt;
&lt;h3&gt;Schema In The Database&lt;/h3&gt;
&lt;p&gt;When the DBMS enforces the schema, then we say the schema is in the database. If you’re using MySQL and you try to insert a value into a column that doesn’t exist, you’ll get an error like this:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ERROR 1054 (42S22): Unknown column &amp;#39;flavor&amp;#39; in &amp;#39;field list&amp;#39;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Whoops. I’ll have to run an ALTER TABLE if I want to do that.&lt;/p&gt;
&lt;h3&gt;Schema In The Code&lt;/h3&gt;
&lt;p&gt;When I used MongoDB, I wouldn’t have this problem. I could write my code to insert flavor fields in documents, and read back those documents and do something with the flavor field. I don’t have to have the schema in the database (the DBMS doesn’t have to enforce it).&lt;/p&gt;
&lt;p&gt;Now my schema is in my code, isn’t it? I can’t do anything useful with something’s flavor attribute unless the code knows it’s there. You could argue that maybe my code doesn’t have to know about it; perhaps it just mindlessly accesses whatever it finds and lets something else do what it pleases with it. In that case, though, the schema is in the client application or user. The buck has to stop somewhere.&lt;/p&gt;
&lt;p&gt;It reminds me of the semantic web, microformats, and the like. All very nice, but somewhere, something or someone has to know what a person is, what an address is, what a song is, what an album and artist is. It can’t be infinite turtles all the way down, can it?&lt;/p&gt;
&lt;h3&gt;Schema In Both Places&lt;/h3&gt;
&lt;p&gt;I’ve just claimed that the schema is in the code if it’s not in the DBMS. If I use MySQL and add a flavor column to the table, then my DBMS knows that this attribute is valid. But even when the DBMS has the schema, the code does too. If my code doesn’t know, respect, and agree with the schema in the DBMS, then we’re going to have problems like the Unknown column error above.&lt;/p&gt;
&lt;p&gt;This is where the fallacy enters, in my opinion. People say their database has no schema, is unstructured, etc. It would be more accurate to say “there is no single centralized schema definition. It is scattered throughout my code.”&lt;/p&gt;
&lt;p&gt;Is that a bad thing?&lt;/p&gt;
&lt;p&gt;In my opinion, no. A strongly enforced central definition is a dependency that doesn’t scale well, in human terms. Large codebases end up with dependencies on centralized schema definitions that are brittle and require lots of things to be updated at a single time, instead of allowing the code to cope with a fluid and evolving schema definition and gradually be updated.&lt;/p&gt;
&lt;p&gt;I remember working at an ecommerce website that had many hundreds of databases, thousands of tables, and if I recall correctly, millions of stored procedures. We used a vendor tool to scan all our source code and databases and show us graphs of the relationships between all these things. After months of waiting for the indexing to complete, we opened up the application and the moment of truth arrived. “Let’s look at the order inventory table,” someone suggested. A glorious hairball emerged, slowly painting line after line until the screen was just a big black blob. It was useless and just told us what we already knew: the schema of the order inventory table was expressed in so many places, a change to it was probably impossible. I don’t know, but I’d bet a donut it hasn’t changed since then.&lt;/p&gt;
&lt;p&gt;The other point of view on this is that the database’s job is to define the data and ensure only valid data is entered. I know this is a common point of pride among people who like PostgreSQL better than MySQL. And it’s surely valid, as well. It’s true that if the DBMS is permissive, you can end up with garbage in it. But my experience with large applications has been that this feels good at first and then becomes a problem later on. Just my two cents.&lt;/p&gt;
&lt;h4&gt;Conclusion&lt;/h4&gt;
&lt;p&gt;Since this is more or less a rant, I should not go on too much longer. Main points:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A database isn’t just a DBMS and the schema and data in it. The apps that interact with the data are usually part of the database per se, too.&lt;/li&gt;
&lt;li&gt;There’s no such thing as schemaless. The schema is always in the code; the question is whether it’s also centrally enforced in the DBMS.&lt;/li&gt;
&lt;li&gt;My experience has been that centralized schema definitions are harder to scale on large applications and codebases.&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>JS &amp; Dom - Tips &amp; Tricks #3</title><link>https://rezhajul.io/posts/js-dom-tips-tricks-3/</link><guid isPermaLink="true">https://rezhajul.io/posts/js-dom-tips-tricks-3/</guid><description>JS &amp; Dom - Tips &amp; Tricks #3</description><pubDate>Thu, 12 Jan 2017 17:56:15 GMT</pubDate><content:encoded>&lt;p&gt;As you might already know, your browser&amp;#39;s developer tools can be used to debug JavaScript by utilizing breakpoints. Breakpoints allow you to pause script execution as well as step into, out of, and over function calls. Breakpoints are added in your developer tools by means of line numbers where you indicate where you want script execution to be paused.&lt;/p&gt;
&lt;p&gt;This is fine until you start changing your code. Now a defined breakpoint may no longer be in the place you want it to be. Take the following code as an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function doSomething() {
  console.log(&amp;quot;first log...&amp;quot;);
  console.log(&amp;quot;last log...&amp;quot;);
}

doSomething();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If I open this code in my browser&amp;#39;s developer tools debugger, I can place a breakpoint before one of the lines inside the &lt;code&gt;doSomething()&lt;/code&gt; function. Let&amp;#39;s say I want to pause script execution before the &amp;quot;last log...&amp;quot; message. This would require that I place the breakpoint on that line. In this case, it would be line 3 of my code.&lt;/p&gt;
&lt;p&gt;But what if I add the breakpoint, then I add another line of code before that line?&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function doSomething() {
  console.log(&amp;quot;first log...&amp;quot;);
  console.log(&amp;quot;second log...&amp;quot;);
  console.log(&amp;quot;last log...&amp;quot;);
}

doSomething();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If I refresh the page, the breakpoint will still be there but now the script will pause execution on the &amp;quot;second log...&amp;quot; message instead of the &amp;quot;last log...&amp;quot; message. Again, this is because the breakpoints are based on line numbers. The debugger is still stopping on line 3, but that&amp;#39;s not what we want.&lt;/p&gt;
&lt;p&gt;Enter JavaScript&amp;#39;s &lt;code&gt;debugger&lt;/code&gt; statement.&lt;/p&gt;
&lt;p&gt;Instead of setting breakpoints directly in the developer tools, I can use the &lt;code&gt;debugger&lt;/code&gt; statement to tell the developer tools where to pause execution:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function doSomething() {
  console.log(&amp;quot;first log...&amp;quot;);
  debugger;
  console.log(&amp;quot;last log...&amp;quot;);
}

doSomething();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now I can add as much code as I want prior to the &lt;code&gt;debugger&lt;/code&gt; statement and the script will still pause in the right place, without any concerns over changing line numbers.&lt;/p&gt;
&lt;p&gt;If you&amp;#39;ve been writing JavaScript for a while, I&amp;#39;m sure this tip is nothing new to you. But those new to debugging will certainly benefit from using this feature, which is &lt;a href=&quot;https://tc39.github.io/ecma262/#sec-debugger-statement&quot;&gt;part of the official ECMAScript spec&lt;/a&gt;. As expected, inserting a &lt;code&gt;debugger&lt;/code&gt; statement will have no effect on your code if a debugger is not present (e.g. if the developer tools are not open).&lt;/p&gt;
</content:encoded></item><item><title>JS &amp; Dom - Tips &amp; Tricks #2</title><link>https://rezhajul.io/posts/js-dom-tips-tricks-2/</link><guid isPermaLink="true">https://rezhajul.io/posts/js-dom-tips-tricks-2/</guid><description>JS &amp; Dom - Tips &amp; Tricks #2</description><pubDate>Mon, 09 Jan 2017 00:56:15 GMT</pubDate><content:encoded>&lt;p&gt;Last year, David Walsh published an article called &lt;a href=&quot;https://davidwalsh.name/essential-javascript-functions&quot;&gt;7 Essential JavaScript Functions&lt;/a&gt; where he shared some JavaScript utility/helper snippets. One of the interesting techniques used in one of the functions is where he gets the absolute URL for a page. He does this using something like the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;var getAbsoluteUrl = (function () {
  var a;
  return function (url) {
    if (!a) a = document.createElement(&amp;quot;a&amp;quot;);
    a.href = url;
    return a.href;
  };
})();

// Usage
getAbsoluteUrl(&amp;quot;/something&amp;quot;); // https://example.com/something
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example reveals a clever little technique. Basically, this function takes advantage of JavaScript&amp;#39;s ability to create elements that don&amp;#39;t actually exist in the DOM. From there, you can get info from that element as you please. In this case, the desired result is the full URL of the page. The browser automatically fills out full URLs in the href value of links, even if they&amp;#39;re written in relative format. So with this we can easily grab the full URL from just a relative page link.&lt;/p&gt;
&lt;p&gt;Similarly, someone in the comments posted the following snippet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;function isValidEmail(string) {
  string = string || &amp;quot;&amp;quot;;
  var lastseg = (string.split(&amp;quot;@&amp;quot;)[1] || &amp;quot;&amp;quot;).split(&amp;quot;.&amp;quot;)[1] || &amp;quot;&amp;quot;,
    input = document.createElement(&amp;quot;input&amp;quot;);
  input.type = &amp;quot;email&amp;quot;;

  input.required = true;
  input.value = string;
  return !!(string &amp;amp;&amp;amp; input.validity &amp;amp;&amp;amp; input.validity.valid &amp;amp;&amp;amp; lastseg.length);
}

console.log(isValidEmail(&amp;quot;&amp;quot;)); // -&amp;gt; false
console.log(isValidEmail(&amp;quot;asda&amp;quot;)); // -&amp;gt; false
console.log(isValidEmail(&amp;quot;asda@gmail&amp;quot;)); // -&amp;gt; false
console.log(isValidEmail(&amp;quot;asda@gmail.com&amp;quot;)); // -&amp;gt; true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There&amp;#39;s a lot going on there, but the principle is the same. The script creates an email input field, without actually inserting it into the page, and then information is gleaned from the input. The acquired info falls in line with what the browser would do if the email input actually existed and was interacted with on the page. This is along the same lines as what happens with feature detection using a library like Modernizr.&lt;/p&gt;
&lt;p&gt;So the basic lesson here is, if you want to find out info on how a browser handles certain HTML elements, remember that those elements don&amp;#39;t actually have to be in the DOM. You can create them yourself and then run some tests and respond accordingly.&lt;/p&gt;
</content:encoded></item><item><title>JS &amp; Dom - Tips &amp; Tricks #1</title><link>https://rezhajul.io/posts/js-dom-tips-tricks-1/</link><guid isPermaLink="true">https://rezhajul.io/posts/js-dom-tips-tricks-1/</guid><description>JS &amp; Dom - Tips &amp; Tricks #1</description><pubDate>Fri, 06 Jan 2017 00:56:15 GMT</pubDate><content:encoded>&lt;p&gt;There are many ways to read content from elements and nodes. Here&amp;#39;s another one that allows you to read and write an individual node&amp;#39;s value. It uses the &lt;code&gt;nodeValue&lt;/code&gt; property of the Node interface. The &lt;code&gt;nodeValue&lt;/code&gt; of an element will always return &lt;code&gt;null&lt;/code&gt;, whereas text nodes, CDATA sections, comment nodes, and even attribute nodes will return the text value of the node. To demonstrate its use, I&amp;#39;ll use the following HTML:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;lt;body class=&amp;quot;home&amp;quot;&amp;gt;
  &amp;lt;p&amp;gt;Example. &amp;lt;span&amp;gt;Example paragraph one.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
  &amp;lt;p&amp;gt;Example paragraph two.&amp;lt;/p&amp;gt;
  &amp;lt;p&amp;gt;Example paragraph three.&amp;lt;/p&amp;gt;
  &amp;lt;p&amp;gt;Example paragraph four.&amp;lt;/p&amp;gt;
  &amp;lt;!-- comment text --&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now I&amp;#39;ll run the following JavaScript to read/write a few of the nodes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;let b = document.body,
  p1 = document.querySelector(&amp;quot;p&amp;quot;);

// reading node values
console.log(b.nodeValue); // null for all elements
console.log(b.attributes[0].nodeValue); // &amp;quot;home&amp;quot;
console.log(b.childNodes[7].nodeValue); // &amp;quot; comment text &amp;quot;

// changing nodeValue of first node inside first paragraph
p1.firstChild.nodeValue = &amp;#39;inserted text&amp;lt;br&amp;gt;&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice the second &lt;code&gt;console.log&lt;/code&gt; is displaying the text of an attribute node. This would be the equivalent of using &lt;code&gt;getAttribute()&lt;/code&gt;, with the difference being that &lt;code&gt;getAttribute()&lt;/code&gt; acts on elements, whereas &lt;code&gt;nodeValue&lt;/code&gt; can be applied to any type of node.&lt;/p&gt;
&lt;p&gt;Also notice that I’m using &lt;code&gt;nodeValue&lt;/code&gt; to read the contents of an HTML comment. This is one of many ways you can do this. This would be the equivalent of reading the &lt;code&gt;textContent&lt;/code&gt; property or &lt;code&gt;data&lt;/code&gt; property of the comment node. As you can see from the final line in that code example, I’m able to define the nodeValue of one of the text nodes, so this isn’t read-only.&lt;/p&gt;
&lt;p&gt;A few other things to note about setting the &lt;code&gt;nodeValue&lt;/code&gt; property: assigning to it returns the value that was set; setting it on a node whose &lt;code&gt;nodeValue&lt;/code&gt; is &lt;code&gt;null&lt;/code&gt; (such as an element) has no effect; and setting a text node&amp;#39;s value to &lt;code&gt;null&lt;/code&gt; is treated the same as setting an empty string, which can be changed again later.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/API/Node/nodeValue&quot;&gt;MDN’s article &lt;/a&gt; on &lt;code&gt;nodeValue&lt;/code&gt; has a chart that lists the different node types and what their &lt;code&gt;nodeValue&lt;/code&gt; will return.&lt;/p&gt;
</content:encoded></item><item><title>Queue in Python - Part 3</title><link>https://rezhajul.io/posts/queue-in-python-part-3/</link><guid isPermaLink="true">https://rezhajul.io/posts/queue-in-python-part-3/</guid><description>Queue in Python</description><pubDate>Mon, 26 Dec 2016 11:53:22 GMT</pubDate><content:encoded>&lt;h2&gt;Prioritize your queue&lt;/h2&gt;
&lt;p&gt;A &lt;code&gt;PriorityQueue&lt;/code&gt; is a queue class provided by the standard library&amp;#39;s &lt;code&gt;queue&lt;/code&gt; module.&lt;/p&gt;
&lt;p&gt;It uses sort order to decide what to retrieve from it first (your object must have a way of comparing its instances):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import queue

class Rezha(object):
    def __init__(self, priority):
        self.priority = priority

    def __lt__(self, other):
        # reversed on purpose, so the highest priority is retrieved first
        return self.priority &amp;gt; other.priority

q = queue.PriorityQueue()
q.put(Rezha(11))
q.put(Rezha(55))
q.put(Rezha(100))
while not q.empty():
    print(q.get().priority)
# output is 100 / 55 / 11
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Having defined the &lt;code&gt;__lt__&lt;/code&gt; method, our PriorityQueue knows now how to sort elements of type &lt;code&gt;Rezha&lt;/code&gt;.&lt;/p&gt;
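If you don't want to define a class at all, a common alternative is to put `(priority, payload)` tuples on the queue: tuples compare element-wise, so the smallest priority number comes out first. A minimal sketch with made-up task names:

```python
import queue

# (priority, payload) tuples compare element-wise, so no custom class
# is needed; the smallest priority number is retrieved first.
q = queue.PriorityQueue()
q.put((2, "write tests"))
q.put((1, "fix bug"))
q.put((3, "refactor"))

while not q.empty():
    priority, task = q.get()
    print(priority, task)
# 1 fix bug
# 2 write tests
# 3 refactor
```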
</content:encoded></item><item><title>Queue in Python - Part 2</title><link>https://rezhajul.io/posts/queue-in-python-part-2/</link><guid isPermaLink="true">https://rezhajul.io/posts/queue-in-python-part-2/</guid><description>Queue in Python</description><pubDate>Wed, 02 Nov 2016 11:53:22 GMT</pubDate><content:encoded>&lt;h2&gt;Double ended queues with deque&lt;/h2&gt;
&lt;p&gt;The deque class in the collections module makes it easy to create deques or double ended queues. Deques allow you to append and delete elements from both ends more efficiently than in lists.&lt;/p&gt;
&lt;p&gt;Import the module:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from collections import deque
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Instantiate deque:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;d = deque()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Append to right and left:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;d.append(&amp;quot;b&amp;quot;)
d.appendleft(&amp;quot;a&amp;quot;)
print(d)
# output is: deque([&amp;#39;a&amp;#39;, &amp;#39;b&amp;#39;])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the same fashion, elements can be deleted (popped):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;d.pop()
d.popleft()
print(d)
# outputs: deque([])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Starting from Python 2.6 you can limit the maximum number of elements in a deque by passing the maxlen argument to the constructor. If the limit is exceeded, items from the opposite end will be popped as new ones are appended to this end:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;d = deque(maxlen=3)
for i in range(4):
    d.append(i)
    print(d)

# Output:
# deque([0], maxlen=3)
# deque([0, 1], maxlen=3)
# deque([0, 1, 2], maxlen=3)
# deque([1, 2, 3], maxlen=3)
&lt;/code&gt;&lt;/pre&gt;
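One more deque method worth knowing alongside the append/pop pairs above is `rotate`, which shifts all elements around the ends in either direction:

```python
from collections import deque

d = deque([1, 2, 3, 4, 5])
d.rotate(2)   # rotate right by 2: elements wrap from the right end to the left
print(d)      # deque([4, 5, 1, 2, 3])
d.rotate(-2)  # rotate left by 2, back to the original order
print(d)      # deque([1, 2, 3, 4, 5])
```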
</content:encoded></item><item><title>Queue in Python - Part 1</title><link>https://rezhajul.io/posts/queue-in-python-part-1/</link><guid isPermaLink="true">https://rezhajul.io/posts/queue-in-python-part-1/</guid><description>Queue in Python</description><pubDate>Wed, 02 Nov 2016 11:00:22 GMT</pubDate><content:encoded>&lt;h2&gt;Best way to implement a simple queue&lt;/h2&gt;
&lt;p&gt;A plain list can be used as a queue abstract data type, where a queue follows the first-in, first-out (FIFO) principle.&lt;/p&gt;
&lt;p&gt;However, this approach will prove inefficient because inserts and pops from the beginning of a list are slow (all elements need shifting by one).&lt;/p&gt;
&lt;p&gt;It&amp;#39;s recommended to implement queues using the &lt;code&gt;deque&lt;/code&gt; class from the &lt;code&gt;collections&lt;/code&gt; module, as it was designed for fast appends and pops from both ends.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from collections import deque
queue = deque([&amp;quot;a&amp;quot;, &amp;quot;b&amp;quot;, &amp;quot;c&amp;quot;])
queue.append(&amp;quot;d&amp;quot;)
queue.append(&amp;quot;e&amp;quot;)
queue.popleft()
print(queue)
# output is: deque([&amp;#39;c&amp;#39;, &amp;#39;d&amp;#39;, &amp;#39;e&amp;#39;])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A reverse queue can be implemented by opting for appendleft instead of append and pop instead of popleft.&lt;/p&gt;
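The reversed variant mentioned above can be sketched in a few lines: enqueue with `appendleft` and dequeue with `pop`, which gives the same FIFO behavior built from the other end.

```python
from collections import deque

# same FIFO behavior, built from the other end:
# enqueue with appendleft, dequeue with pop
queue = deque()
queue.appendleft("a")
queue.appendleft("b")
queue.appendleft("c")
print(queue.pop())  # a  (first in, first out)
print(queue)        # deque(['c', 'b'])
```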
</content:encoded></item><item><title>Fix Java Unsupported major.minor version 52.0 on Ubuntu</title><link>https://rezhajul.io/posts/fix-java-unsupported-major-minor-version-52-0-on-ubuntu/</link><guid isPermaLink="true">https://rezhajul.io/posts/fix-java-unsupported-major-minor-version-52-0-on-ubuntu/</guid><description>Fix Java Unsupported major.minor version 52.0 on Ubuntu</description><pubDate>Tue, 12 Jul 2016 06:33:05 GMT</pubDate><content:encoded>&lt;p&gt;Today, when I ran gradle test, I hit an &lt;code&gt;Unsupported major.minor version 52.0&lt;/code&gt; error. It occurs when you try to run a class compiled with the Java 1.8 compiler on a lower JRE version, e.g. JRE 1.7 or JRE 1.6. The simplest way to fix this error is to install the latest Java release, i.e. Java 8, and run your program with it.&lt;/p&gt;
&lt;p&gt;The Ubuntu archives have multiple versions of OpenJDK available. One of these is designated as the default and this has the package names default-jdk and default-jre. The java and javac programs will be symlinked to the binaries from this default JDK. On my Ubuntu, the default packages were linked to the openjdk-7-jdk and openjdk-7-jre packages.&lt;/p&gt;
&lt;p&gt;However, you might want to install and use other versions of JDK. For example, to use Java 8 I did:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt install openjdk-8-jdk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The problem is that java, javac and other binaries still point to the default Java version. To switch the default Java binaries, use the update-java-alternatives tool.&lt;/p&gt;
&lt;p&gt;To list the Java versions installed on your system, use the --list option:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;update-java-alternatives --list
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;java-1.6.0-openjdk-amd64 1061 /usr/lib/jvm/java-1.6.0-openjdk-amd64
java-1.7.0-openjdk-amd64 1071 /usr/lib/jvm/java-1.7.0-openjdk-amd64
java-1.8.0-openjdk-amd64 1069 /usr/lib/jvm/java-1.8.0-openjdk-amd64
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To set one of the above Java versions as the default, use the --set option:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo update-java-alternatives --set java-1.8.0-openjdk-amd64
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Enable Spark Context on Your Ipython Notebook</title><link>https://rezhajul.io/posts/enable-spark-context-on-ipython-notebook/</link><guid isPermaLink="true">https://rezhajul.io/posts/enable-spark-context-on-ipython-notebook/</guid><description>Enable Spark Context on Your Ipython Notebook</description><pubDate>Sat, 25 Jun 2016 08:04:33 GMT</pubDate><content:encoded>&lt;p&gt;When you&amp;#39;re trying out Spark with its Python REPL, it&amp;#39;s really easy to write stuff using a simple function or lambda. However, it becomes a pain once you try anything more complex, because it&amp;#39;s easy to get something like indentation wrong.&lt;/p&gt;
&lt;p&gt;Try running your pyspark with this command&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;IPYTHON_OPTS=&amp;quot;notebook&amp;quot; path/to/your/pyspark
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It will start an IPython Notebook in your browser with the Spark context available as the sc variable. You can start using it like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;../../assets/images/2016/06/spark.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
</content:encoded></item><item><title>Fix GDM Service Can&apos;t be Enabled on Arch Linux</title><link>https://rezhajul.io/posts/fix-gdm-service-cant-be-enabled-on-arch-linux/</link><guid isPermaLink="true">https://rezhajul.io/posts/fix-gdm-service-cant-be-enabled-on-arch-linux/</guid><description>Fix GDM Service Can&apos;t be Enabled on Arch Linux</description><pubDate>Wed, 10 Feb 2016 17:01:15 GMT</pubDate><content:encoded>&lt;p&gt;Today I changed my desktop environment from XFCE to Gnome 3. However, the GDM service always failed to enable. Every time I ran &lt;code&gt;sudo systemctl enable gdm&lt;/code&gt;, it returned &lt;code&gt;Failed to execute operation: File exists&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This was caused by another display manager still being enabled, though I didn&amp;#39;t know which one. Fortunately, running &lt;code&gt;sudo systemctl enable gdm -f&lt;/code&gt; forces GDM to be enabled and disables the other display manager:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;Removed symlink /etc/systemd/system/display-manager.service.
Created symlink from /etc/systemd/system/display-manager.service to /usr/lib/systemd/system/gdm.service.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Voila!!&lt;/p&gt;
</content:encoded></item><item><title>Remove Entry from Known Host</title><link>https://rezhajul.io/posts/remove-entry-from-known-host/</link><guid isPermaLink="true">https://rezhajul.io/posts/remove-entry-from-known-host/</guid><description>Remove Entry from Known Host</description><pubDate>Wed, 03 Feb 2016 06:01:14 GMT</pubDate><content:encoded>&lt;p&gt;Sometimes I move a domain from one server to another and face this problem&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;Add correct host key in /home/rezha/.ssh/known_hosts to get rid of this message.
Offending key in /home/rezha/.ssh/known_hosts:60
RSA host key for rezhajulio.id has changed and you have requested strict checking.
Host key verification failed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I was too lazy to find line 60 of known_hosts, so I used to end up deleting the entire file.
Instead, use this command to remove a single entry from known_hosts:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ssh-keygen -R rezhajulio.id
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Replace the hostname with your own conflicting hostname or IP address.&lt;/p&gt;
</content:encoded></item><item><title>Installing or Upgrading PostgreSQL 9.4 on Ubuntu 14.04</title><link>https://rezhajul.io/posts/installing-or-upgrading-postgresql-9-4-on-ubuntu-14-04/</link><guid isPermaLink="true">https://rezhajul.io/posts/installing-or-upgrading-postgresql-9-4-on-ubuntu-14-04/</guid><description>Installing or Upgrading PostgreSQL 9.4 on Ubuntu 14.04</description><pubDate>Thu, 20 Aug 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&amp;#39;m currently working on a side project that requires the JSONB feature of PostgreSQL 9.4. But Ubuntu 14.04 only ships PostgreSQL 9.3 in its repositories. Here is, step by step, how to install PostgreSQL 9.4 on Ubuntu 14.04, or how to upgrade if you are already using PostgreSQL 9.3. &lt;!--more--&gt;&lt;/p&gt;
&lt;p&gt;Add the PostgreSQL 9.4 repository, import the repository signing key, and update the package lists&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;echo &amp;#39;deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main&amp;#39; | sudo tee /etc/apt/sources.list.d/pgdg.list

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
  sudo apt-key add -

sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Install PostgreSQL 9.4 and pgAdmin III&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get install postgresql-9.4 pgadmin3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check current clusters&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pg_lsclusters
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Stop and drop the fresh PostgreSQL 9.4 cluster, upgrade the 9.3 cluster to 9.4, then drop the old 9.3 cluster&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo pg_dropcluster 9.4 main --stop
sudo pg_upgradecluster 9.3 main
sudo pg_dropcluster 9.3 main
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Confirm only 9.4 cluster remains&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pg_lsclusters
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Fix Locale Settings Error On Ubuntu Server 14.04</title><link>https://rezhajul.io/posts/fix-locale-settings-error-on-ubuntu-server-14-04/</link><guid isPermaLink="true">https://rezhajul.io/posts/fix-locale-settings-error-on-ubuntu-server-14-04/</guid><description>Fix Locale Settings Error On Ubuntu Server 14.04</description><pubDate>Wed, 29 Jul 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We just migrated our cloud from Amazon Web Services to SoftLayer, and we&amp;#39;re really happy with the performance improvement. But somehow the new system throws errors like this. &lt;!--more--&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = &amp;quot;en_US:&amp;quot;,
	LC_ALL = (unset),
	LC_PAPER = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_ADDRESS = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_MONETARY = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_NUMERIC = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_TELEPHONE = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_IDENTIFICATION = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_MEASUREMENT = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_CTYPE = &amp;quot;en_US.UTF-8&amp;quot;,
	LC_TIME = &amp;quot;id_ID.UTF-8&amp;quot;,
	LC_NAME = &amp;quot;id_ID.UTF-8&amp;quot;,
	LANG = &amp;quot;en_US&amp;quot;
    are supported and installed on your system.
perl: warning: Falling back to the standard locale (&amp;quot;C&amp;quot;).
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Apparently the OS doesn&amp;#39;t know about en_US.UTF-8. We can easily fix this by running these few commands&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
sudo locale-gen
sudo dpkg-reconfigure locales
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>EAFP Coding Style in Python</title><link>https://rezhajul.io/posts/eafp-coding-style-in-python/</link><guid isPermaLink="true">https://rezhajul.io/posts/eafp-coding-style-in-python/</guid><description>EAFP Coding Style in Python</description><pubDate>Mon, 27 Apr 2015 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;What is EAFP?&lt;/h2&gt;
&lt;p&gt;EAFP (Easier to Ask for Forgiveness than Permission) is a coding style that&amp;#39;s commonly used in the Python community. This style assumes that needed variables, files, etc. exist, and catches any problems as exceptions. The result is generally clean and concise code containing a lot of try and except statements. It contrasts sharply with the LBYL (Look Before You Leap) approach common in many other languages such as C, which is characterized by the presence of many if statements. &lt;!--more--&gt;&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;p&gt;We have some old code that exports an Excel file; if a file with the same name already exists in the temporary folder, we delete it first.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os

if os.path.exists(&amp;quot;something.xlsx&amp;quot;):  # violates EAFP coding style
    os.unlink(&amp;quot;something.xlsx&amp;quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;EAFP coding style prefers writing code like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os

try:
    os.unlink(&amp;quot;something.xlsx&amp;quot;)
except OSError:  # raised when file does not exist
    pass
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Unlike the original code, the modified code simply assumes that the needed file exists, and catches any problems as exceptions. In the example above, if the file does not exist, the problem will be caught as an OSError exception.&lt;/p&gt;
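On newer Python (3.4+), the same EAFP idiom can be written even more concisely with contextlib.suppress; a small sketch of the equivalent:

```python
import os
from contextlib import suppress

# Equivalent EAFP idiom: attempt the unlink and silently
# ignore the OSError raised when the file does not exist.
with suppress(OSError):
    os.unlink("something.xlsx")
```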
</content:encoded></item><item><title>Installing Varnish with nginx on Ubuntu 14.04</title><link>https://rezhajul.io/posts/installing-varnish-with-nginx-on-ubuntu-14-04/</link><guid isPermaLink="true">https://rezhajul.io/posts/installing-varnish-with-nginx-on-ubuntu-14-04/</guid><description>Installing Varnish with nginx on Ubuntu 14.04</description><pubDate>Sun, 26 Apr 2015 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;About Varnish&lt;/h2&gt;
&lt;p&gt;Varnish is an HTTP accelerator and a useful tool for speeding up a server, especially during times of high traffic. It works by serving cached copies of pages whenever possible and only hitting the backend server when an active process is required. &lt;!--more--&gt;&lt;/p&gt;
&lt;h2&gt;Installing Varnish&lt;/h2&gt;
&lt;p&gt;I assume you already have a web server working with nginx. In your terminal, run these commands one by one&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get install apt-transport-https
curl https://repo.varnish-cache.org/ubuntu/GPG-key.txt | sudo apt-key add -
echo &amp;quot;deb https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0&amp;quot; | sudo tee -a /etc/apt/sources.list.d/varnish-cache.list
sudo apt-get update
sudo apt-get install varnish
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will install the latest Varnish 4 on your system. Without the Varnish repo, you&amp;#39;d only get Varnish 3 from the official Ubuntu repo. Just replace trusty with your version of Ubuntu if you don&amp;#39;t use 14.04.&lt;/p&gt;
&lt;h2&gt;Configuring Varnish&lt;/h2&gt;
&lt;p&gt;Once you have Varnish installed, you can start configuring it to ease the load on your virtual private server.&lt;/p&gt;
&lt;p&gt;Varnish will serve the content on port 80, while fetching it from nginx which will run on port 8080.&lt;/p&gt;
&lt;p&gt;Go ahead and start setting that up by opening the /etc/default/varnish file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo nano /etc/default/varnish
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Find the lines under &amp;quot;DAEMON_OPTS&amp;quot; in the Alternative 2 section, and change the port number after &amp;quot;-a&amp;quot; to 80. The configuration should match the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt; DAEMON_OPTS=&amp;quot;-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next edit the file located at &lt;code&gt;/etc/varnish/default.vcl&lt;/code&gt;. Change the port from 80 to 8080 like below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;backend default {
    .host = &amp;quot;127.0.0.1&amp;quot;;
    .port = &amp;quot;8080&amp;quot;;
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
    .max_connections = 800;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because this server runs WordPress and Ghost, I need to tell Varnish not to cache the admin areas of WordPress and Ghost. Still in &lt;code&gt;/etc/varnish/default.vcl&lt;/code&gt;, I modified the &lt;code&gt;sub vcl_recv&lt;/code&gt; block into this&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sub vcl_recv {
        if (!(req.url ~ &amp;quot;wp-(login|admin)&amp;quot;)) {
                unset req.http.cookie;
        }

        if (!(req.url ~ &amp;quot;ghost&amp;quot;)) {
                unset req.http.cookie;
        }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next step is to edit the port used by nginx. Each server block (if you have it configured that way) needs to be switched from 80 to 8080. My config files are located in &lt;code&gt;/etc/nginx/sites-available&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Once you have made all of the required changes, restart varnish and nginx.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo service nginx restart
sudo service varnish restart
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Accessing your domain should instantly take you to the varnish cached version. To test, you can run &lt;code&gt;varnishstat&lt;/code&gt;, and that should give you live data as you access the domain affected by varnish.&lt;/p&gt;
</content:encoded></item><item><title>Disabling Apport Error Reporting on Ubuntu</title><link>https://rezhajul.io/posts/disabling-apport-error-reporting-on-ubuntu/</link><guid isPermaLink="true">https://rezhajul.io/posts/disabling-apport-error-reporting-on-ubuntu/</guid><description>Disabling Apport Error Reporting on Ubuntu</description><pubDate>Sat, 25 Apr 2015 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;What is Apport?&lt;/h2&gt;
&lt;p&gt;Apport is an error reporting service provided by Ubuntu to intercept and analyze crashes and bugs as they occur. Crashes and bugs may sound like bad things, but most operating systems actually hit several a day; that doesn&amp;#39;t mean your computer is broken, nor does it necessarily stop working. As such, Apport can usually be safely disabled: it doesn&amp;#39;t fix anything, it just tells developers that something went wrong. &lt;!--more--&gt;&lt;/p&gt;
&lt;p&gt;Ubuntu 12.04 was the first release of Ubuntu to ship with Apport error reporting enabled by default, and as a result you may get a large number of Internal System Error popups in Ubuntu. Similar popups may read Sorry, Ubuntu 12.04 has experienced an internal error. These popups are part of Apport, an internal debugger that automatically generates reports to submit for packages that crash. Many reports can&amp;#39;t be filed or have already been filed, so it is usually safe to turn Apport off.&lt;/p&gt;
&lt;h2&gt;Stopping Apport&lt;/h2&gt;
&lt;p&gt;You can stop the currently running Apport service with the following command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo service apport stop
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that unless you remove it or disable it at boot it will start again the next time you turn on your computer.&lt;/p&gt;
&lt;h2&gt;Disable Apport at Boot&lt;/h2&gt;
&lt;p&gt;You need to manually edit a file to Stop Apport Running at Boot (when you turn on your machine). Open the Terminal, and paste the following command with Ctrl+Shift+V, or type it in manually.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo nano /etc/default/apport
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Change the line that says enabled=1 to enabled=0 to disable Apport. To re-enable, change it back.&lt;/p&gt;
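If you prefer a one-liner to opening an editor, sed can flip that flag in place. Demonstrated here on a scratch copy first (the real file is /etc/default/apport and needs root, assuming a stock install where the line reads exactly enabled=1):

```shell
# Demonstrate on a scratch copy of the defaults file
printf 'enabled=1\n' > /tmp/apport_defaults
sed -i 's/^enabled=1$/enabled=0/' /tmp/apport_defaults
cat /tmp/apport_defaults
# prints: enabled=0

# The real thing (needs root):
#   sudo sed -i 's/^enabled=1$/enabled=0/' /etc/default/apport
```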
&lt;h2&gt;Uninstall Apport&lt;/h2&gt;
&lt;p&gt;It is fairly simple to uninstall Apport, as you can open the Ubuntu Software Centre, search for apport, and simply click Remove.&lt;/p&gt;
&lt;p&gt;A similar process can be used for the package apport in both Synaptic and the Terminal.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt-get purge apport
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Even though it is safe to uninstall Apport, I don&amp;#39;t really recommend it, in case you want to re-enable it later. Stopping Apport is already enough.&lt;/p&gt;
</content:encoded></item><item><title>Do Git Pull When Git Push Failed</title><link>https://rezhajul.io/posts/do-git-pull-when-git-push-failed/</link><guid isPermaLink="true">https://rezhajul.io/posts/do-git-pull-when-git-push-failed/</guid><description>Do Git Pull When Git Push Failed</description><pubDate>Tue, 31 Mar 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Don&amp;#39;t you hate it when you&amp;#39;re about to push your code to a git repository and find out someone pushed earlier, so you must do a git pull and re-push?&lt;/p&gt;
&lt;p&gt;I made a simple, dumb bash script that does a git pull when git push fails because you need to pull first. Here is the script. &lt;!--more--&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
if git push
then
    echo &amp;quot;DONE PUSHING&amp;quot;
else
    echo &amp;quot;&amp;quot;
    echo &amp;quot;OH SHIT. SOMEBODY ALREADY PUSHED TO THE REPO&amp;quot;
    echo &amp;quot;&amp;quot;
    if git pull
    then
        echo &amp;quot;&amp;quot;
        echo &amp;quot;DONE PULLING. LETS RE-PUSH THAT CODE&amp;quot;
        echo &amp;quot;&amp;quot;
        git push
    else
        echo &amp;quot;&amp;quot;
        echo &amp;quot;OH NO! THERE&amp;#39;S SOME MERGE CONFLICT. FIX IT FIRST!&amp;quot;
        echo &amp;quot;&amp;quot;
    fi
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On Linux you can just make an alias to run the script. On Windows with CMDER, edit the aliases file and add &lt;code&gt;gp=sh &amp;quot;path/to/your/file.sh&amp;quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I hope you find it helpful.&lt;/p&gt;
</content:encoded></item><item><title>Essentials Javascripts Link</title><link>https://rezhajul.io/posts/essentials-javascripts-link/</link><guid isPermaLink="true">https://rezhajul.io/posts/essentials-javascripts-link/</guid><description>Essentials Javascripts Link</description><pubDate>Wed, 25 Feb 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Whether you&amp;#39;re a JavaScript newbie or hero, you&amp;#39;ll find this link really useful. &lt;!--more--&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;https://gist.github.com/ericelliott/d576f72441fc1b27dace
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>PostgreSQL 9.4 failed to start on Manjaro</title><link>https://rezhajul.io/posts/postgresql-9-4-failed-to-start-on-manjaro/</link><guid isPermaLink="true">https://rezhajul.io/posts/postgresql-9-4-failed-to-start-on-manjaro/</guid><description>PostgreSQL 9.4 failed to start on Manjaro</description><pubDate>Sat, 07 Feb 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;After setting up Manjaro on my new work laptop, I installed PostgreSQL 9.4. But when I tried to start the service by running &lt;code&gt;sudo systemctl start postgresql&lt;/code&gt;, PostgreSQL kept failing to start. &lt;!--more--&gt;&lt;/p&gt;
&lt;p&gt;Apparently, before PostgreSQL can work correctly on an Arch Linux-based distro, the database cluster must be initialized by the postgres user. To fix this, just run the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo -i -u postgres
initdb --locale en_US.UTF-8 -E UTF8 -D &amp;#39;/var/lib/postgres/data&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>MariaDB error failed to start, errno 12</title><link>https://rezhajul.io/posts/mysql-mariadb-error-failed-to-start-errno-12/</link><guid isPermaLink="true">https://rezhajul.io/posts/mysql-mariadb-error-failed-to-start-errno-12/</guid><description>MariaDB error failed to start, errno 12</description><pubDate>Sun, 25 Jan 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This server hosts about 4 blogs, all running on MariaDB 10. After a few days of running, the MariaDB service kept failing on this micro EC2 instance with an error like this. &lt;!--more--&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;120423 09:13:38 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
120423 09:14:27 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
120423  9:14:27 [Note] Plugin &amp;#39;FEDERATED&amp;#39; is disabled.
120423  9:14:27 InnoDB: The InnoDB memory heap is disabled
120423  9:14:27 InnoDB: Mutexes and rw_locks use GCC atomic builtins
120423  9:14:27 InnoDB: Compressed tables use zlib 1.2.3
120423  9:14:27 InnoDB: Using Linux native AIO
120423  9:14:27 InnoDB: Initializing buffer pool, size = 512.0M
InnoDB: mmap(549453824 bytes) failed; errno 12
120423  9:14:27 InnoDB: Completed initialization of buffer pool
120423  9:14:27 InnoDB: Fatal error: cannot allocate memory for the buffer pool
120423  9:14:27 [ERROR] Plugin &amp;#39;InnoDB&amp;#39; init function returned error.
120423  9:14:27 [ERROR] Plugin &amp;#39;InnoDB&amp;#39; registration as a STORAGE ENGINE failed.
120423  9:14:27 [ERROR] Unknown/unsupported storage engine: InnoDB
120423  9:14:27 [ERROR] Aborting
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Error 12 is &lt;code&gt;OS error code:(ENOMEM) Out of memory&lt;/code&gt;, so of course you could just throw more memory at it. Or you could enable swap so MariaDB doesn&amp;#39;t crash anymore. By default, a micro EC2 instance has no swap, so we need to run a few Linux commands to create one.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo dd if=/dev/zero of=/swaps bs=1M count=1024
sudo mkswap /swaps
sudo swapon /swaps
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Taadaa!! Your MariaDB instance should run flawlessly now. To make your change persist across reboots, add this line &lt;code&gt;/swaps swap swap defaults 0 0&lt;/code&gt; to &lt;code&gt;/etc/fstab&lt;/code&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Vagrant&apos;s hosted Django can&apos;t be accessed</title><link>https://rezhajul.io/posts/vagrants-hosted-django-cant-be-accessed/</link><guid isPermaLink="true">https://rezhajul.io/posts/vagrants-hosted-django-cant-be-accessed/</guid><description>Vagrant&apos;s hosted Django can&apos;t be accessed</description><pubDate>Wed, 14 Jan 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A few days ago I started using Vagrant to develop the Django project at my old company. After I finished installing the project&amp;#39;s dependencies and ran &lt;code&gt;python manage.py runserver&lt;/code&gt;, I couldn&amp;#39;t access 127.0.0.1:8000 from my browser, even after I enabled Vagrant&amp;#39;s port forwarding. &lt;!--more--&gt;&lt;/p&gt;
&lt;p&gt;The thing is, Django&amp;#39;s default settings make it listen on 127.0.0.1, which can only be accessed from within the Vagrant VM. So we need Django to listen on all network interfaces by binding it to 0.0.0.0. Running &lt;code&gt;python manage.py runserver 0.0.0.0:8000&lt;/code&gt; fixes the problem.&lt;/p&gt;
</content:encoded></item><item><title>Minifying Your JavaScript with Uglify</title><link>https://rezhajul.io/posts/minifying-your-javascript-with-uglify/</link><guid isPermaLink="true">https://rezhajul.io/posts/minifying-your-javascript-with-uglify/</guid><description>Minifying Your JavaScript with Uglify</description><pubDate>Thu, 23 Oct 2014 10:37:49 GMT</pubDate><content:encoded>&lt;p&gt;Do your web visitors a favor with a faster website&lt;/p&gt;

&lt;p&gt;Programmers usually pay attention to the performance of the code they produce. But JavaScript that runs on the client side deserves extra attention.&lt;/p&gt;

&lt;p&gt;The browser can&amp;#39;t display any of your work until the JavaScript has been downloaded. It&amp;#39;s crucial to get the JavaScript downloaded as fast as possible so the website doesn&amp;#39;t feel &lt;em&gt;sluggish&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;One thing you can do to speed up JavaScript delivery to the browser is to shrink its size.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;(function() {
    var process = &quot;minification&quot;;
    var tool = &quot;Uglify&quot;;

    function logProcess () {
        console.log(process);
    }

    logProcess();
    // =&amp;gt; &quot;minification&quot;
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This example JavaScript code can be shrunk down to this&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;!function(){function n(){console.log(i)}var i=&quot;minification&quot;;n()}();
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Although the code looks very different, the JavaScript &lt;em&gt;engine&lt;/em&gt; does exactly the same thing. This result is obtained by doing several things such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removing whitespace.&lt;/li&gt;
&lt;li&gt;Removing comments.&lt;/li&gt;
&lt;li&gt;Renaming variables and functions to shorter names.&lt;/li&gt;
&lt;li&gt;Using the more &lt;em&gt;concise&lt;/em&gt; &lt;a href=&quot;http://analcime.apps.dj/cNwdH&quot;&gt;IIFE&lt;/a&gt; syntax.&lt;/li&gt;
&lt;li&gt;Dropping unused variables (the tool variable in the example above).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are quite a few tools for minifying JavaScript, but the easiest to use is &lt;a href=&quot;http://analcime.apps.dj/jNGnB&quot;&gt;UglifyJS&lt;/a&gt;. To get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure Node.js is installed.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;sudo npm install -g uglify-js&lt;/code&gt; in your terminal.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can then use UglifyJS with the following command&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;uglifyjs myfile.js --output myfile.min.js
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With its default settings, UglifyJS only performs simple minification by removing whitespace and comments. For better results you can add these flags. &lt;br /&gt;
1. &lt;code&gt;--mangle&lt;/code&gt; renames variables and functions to shorter names &lt;br /&gt;
2. &lt;code&gt;--compress&lt;/code&gt; performs optimizations and drops unused variables.&lt;/p&gt;

&lt;p&gt;Using both flags gives a command like this&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;uglifyjs myfile.js --output myfile.min.js --mangle --compress
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;UglifyJS can also minify many files at once and combine them into a single file with the following command.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;uglifyjs folderJS/*.js --output all.min.js --mangle --compress
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For more detailed usage, have a look &lt;a href=&quot;http://analcime.apps.dj/PHQRt&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now you have another solid tool in your toolbox. Time to build faster websites!&lt;/p&gt;</content:encoded></item><item><title>HTTPS Everywhere!</title><link>https://rezhajul.io/posts/https-everywhere/</link><guid isPermaLink="true">https://rezhajul.io/posts/https-everywhere/</guid><description>HTTPS Everywhere!</description><pubDate>Wed, 15 Oct 2014 07:25:47 GMT</pubDate><content:encoded>&lt;p&gt;and why the internet should do the same thing&lt;/p&gt;

&lt;p&gt;Starting now, my blog&apos;s address is &lt;a href=&apos;https://blog.rezhajulio.web.id&apos;&gt;https://blog.rezhajulio.web.id&lt;/a&gt;. Yes, there is an extra &quot;s&quot; in the protocol, which means your connection to my blog (and to every subdomain under the rezhajulio.web.id domain) is encrypted. &lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../assets/images/2014/10/https.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;

&lt;p&gt;Depending on the browser you use, you will see a padlock icon, which, when clicked, shows privacy / security information.&lt;/p&gt;

&lt;h2 id=&quot;apamaksudnya&quot;&gt;What does it mean?&lt;/h2&gt;

&lt;p&gt;First, your browser is really connected to blog.rezhajulio.web.id, not some other website pretending to be mine. Especially now that a bunch of &lt;del&gt;smart&lt;/del&gt; people have &lt;a href=&quot;http://analcime.apps.dj/yPfha&quot;&gt;blocked public DNS&lt;/a&gt; and centralized DNS in a way that is &lt;a href=&quot;http://analcime.apps.dj/42fKo&quot;&gt;vulnerable to &lt;em&gt;hijacking&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Second, the information between your browser and this website is encrypted strongly enough that anyone spying on you will have a hard time monitoring the data exchanged between this website and your browser.&lt;/p&gt;

&lt;p&gt;If you often browse on public WiFi, be careful whenever you enter &lt;em&gt;credentials&lt;/em&gt; on an unencrypted website. Someone could be waiting for your username-password combination and then use it to log in to other websites, such as your social media or even your financial accounts (I am sure plenty of people still use the same username-password combination across websites).&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../assets/images/2014/10/Screenshot---151014---20-23-52.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;

&lt;h2 id=&quot;turningtheswitch&quot;&gt;Turning the switch&lt;/h2&gt;

&lt;p&gt;Do you have a website? Why not use SSL? &lt;em&gt;you’ll be doing the Internet, and the people who use it, a favor.&lt;/em&gt; Of the hundreds of millions of websites on the internet today, only about 2 million have deployed SSL.&lt;/p&gt;

&lt;p&gt;Put simply, you only need three steps to get an encrypted website: a web host that supports HTTPS, an SSL certificate, and a website configured to use that SSL certificate.&lt;/p&gt;

&lt;p&gt;Before there was &lt;em&gt;client support&lt;/em&gt; for &lt;a href=&quot;http://analcime.apps.dj/3Bo6g&quot;&gt;Server Name Indication&lt;/a&gt;, you needed one &lt;em&gt;dedicated&lt;/em&gt; IP address for every HTTPS website, and that was fairly expensive. Server Name Indication lets HTTPS websites share an IP address, which in turn lets shared hosting providers offer SSL and keep costs down.&lt;/p&gt;
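&lt;p&gt;To see SNI from the client side, here is a small sketch in Python (my own illustration, not part of the original post): in the standard &lt;code&gt;ssl&lt;/code&gt; module, the &lt;code&gt;server_hostname&lt;/code&gt; argument is what places the requested host name into the TLS handshake, letting one IP address serve many HTTPS sites. The hostname you pass is only an example.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import socket
import ssl

# server_hostname sends the SNI extension during the handshake, so the
# server knows which certificate to present even on a shared IP address.
def fetch_certificate(hostname, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# ssl.HAS_SNI reports whether the local OpenSSL build supports SNI.
print(ssl.HAS_SNI)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Calling &lt;code&gt;fetch_certificate(&apos;blog.rezhajulio.web.id&apos;)&lt;/code&gt; would return the certificate details behind the padlock icon.&lt;/p&gt;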

&lt;h2 id=&quot;akankahenkripsimemperlambatwebsaya&quot;&gt;Will encryption slow my website down?&lt;/h2&gt;

&lt;p&gt;Yes, of course, but nothing like back in 1999, when computing power was nowhere near what it is today. SSL encryption is now very fast. Here are &lt;a href=&quot;http://analcime.apps.dj/Adxoz&quot;&gt;real-world numbers from Gmail&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;kesimpulan&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Because the data between the browser and the website is encrypted, a bad actor who wants to&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Steal your cookies&lt;/li&gt;
&lt;li&gt;Snoop on what you are doing&lt;/li&gt;
&lt;li&gt;Watch what you type&lt;/li&gt;
&lt;li&gt;Tamper with the data you send and receive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;will find all of it very hard to do. On top of that, Google strongly prioritizes HTTPS and, since August 6, 2014, has included &lt;a href=&quot;http://analcime.apps.dj/ki6z5&quot;&gt;HTTPS as a factor in its ranking signals&lt;/a&gt;, so websites served over HTTPS get extra points in search rankings. &lt;em&gt;Big news for your website&apos;s SEO, huh?&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title>Using Git Like A Boss</title><link>https://rezhajul.io/posts/using-git-like-a-boss/</link><guid isPermaLink="true">https://rezhajul.io/posts/using-git-like-a-boss/</guid><description>Using Git Like A Boss</description><pubDate>Thu, 24 Jul 2014 13:17:38 GMT</pubDate><content:encoded>&lt;p&gt;If you are not using it already, you &lt;a href=&quot;http://analcime.apps.dj/5p0Oc&quot;&gt;should start using version control&lt;/a&gt; right away. I will not dwell on that topic here, because there is already &lt;a href=&quot;http://analcime.apps.dj/Z3j6-&quot;&gt;plenty&lt;/a&gt; of &lt;a href=&quot;http://analcime.apps.dj/M9R10&quot;&gt;good&lt;/a&gt; &lt;a href=&quot;http://analcime.apps.dj/6W3HE&quot;&gt;reading&lt;/a&gt; &lt;a href=&quot;http://analcime.apps.dj/6ZJow&quot;&gt;material&lt;/a&gt; explaining the reasoning behind that statement.&lt;/p&gt;

&lt;p&gt;The topic I want to talk about this time is &quot;bossing your Git&quot;, and why it matters quite a bit.&lt;/p&gt;

&lt;p&gt;If you are still avoiding an understanding of the architecture behind this version control system, odds are you are not using Git to its full capability.&lt;/p&gt;

&lt;p&gt;This is not a &quot;if you are not using the CLI, you are doing it wrong&quot; piece. &lt;strong&gt;By all means use a GUI&lt;/strong&gt;; I used one too in the past (back when I was still on SVN with &lt;a href=&quot;http://analcime.apps.dj/Jl2my&quot;&gt;Tortoise SVN&lt;/a&gt;, and now with Git). I still sometimes use gitk to look at what I have done from a broader perspective.&lt;/p&gt;

&lt;p&gt;This is about what actually happens when you press the Sync button in your GUI, so that when something unexpected occurs you can form an idea of what caused the problem and fix it yourself, or ask someone who knows better.&lt;/p&gt;

&lt;p&gt;Many version control users are &lt;em&gt;frontend developers&lt;/em&gt; or &lt;em&gt;web designers&lt;/em&gt;: very &quot;visual&quot; people, in the sense that they enjoy tools with beautiful interfaces. That is not a criticism or a prohibition, but I see no reason not to understand your own version control system. You do not need to know everything; a basic understanding of what happens when you execute a command is the key to using your current tools better.&lt;/p&gt;

&lt;p&gt;Most Git GUIs offer functionality such as committing, browsing the log, viewing diffs, and so on. The trouble starts when&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You start working with many branches, tags, and remotes.&lt;/li&gt;
&lt;li&gt;Your team starts using fairly involved Git commands that your GUI does not offer.&lt;/li&gt;
&lt;li&gt;You, or someone else, makes a mistake.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When an accident happens, GUI applications are generally poor at recovery. Those who do not understand their version control assume their work is gone and start to panic. With Git, or any other modern version control system, it is actually very hard to lose work, as long as it has been committed and the index file is not corrupted. &lt;/p&gt;

&lt;p&gt;In my opinion the Git CLI is the easiest way to handle this. A GUI wraps several steps behind one button, and if one of those steps goes wrong, recovering is very hard. For casual users who avoid the CLI, that failure undercuts the very expectations they had of the GUI application in the first place.&lt;/p&gt;

&lt;p&gt;The Git CLI and GUIs are really both Git clients. They process the same index file and share the same common features, although in the end the CLI offers far more features than any GUI.&lt;/p&gt;

&lt;p&gt;That said, I am not surprised that many people prefer a GUI and avoid the CLI. As of this writing, the Git CLI has 146 commands. Does any Git GUI offer that many? Although my Git skills are nowhere near those of Linus Torvalds, the creator of Git, I consider myself fairly proficient: I can use at least 20 commands from memory for my day-to-day work. What matters most is understanding how distributed version control, staging, committing, pushing, pulling, merging, and rebasing work; the rest is easy enough to pick up.&lt;/p&gt;

&lt;p&gt;Recommended reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://analcime.apps.dj/Ls3CN&quot;&gt;Pro Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://analcime.apps.dj/-jIgY&quot;&gt;Git Immersion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://analcime.apps.dj/rDBcY&quot;&gt;Top 10 Git Tutorials for Beginners&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title>What&apos;s New in WordPress 3.8</title><link>https://rezhajul.io/posts/terbaru-di-wordpress-3-8/</link><guid isPermaLink="true">https://rezhajul.io/posts/terbaru-di-wordpress-3-8/</guid><description>What&apos;s New in WordPress 3.8</description><pubDate>Thu, 24 Jul 2014 10:42:34 GMT</pubDate><content:encoded>&lt;p&gt;A new default theme and 8 beautiful admin color schemes&lt;/p&gt;

&lt;p&gt;If you have not upgraded your WordPress to 3.7 yet, do not bother, because WordPress 3.8 is here. Yaaay! :D. Since this upgrade is a major version, the automatic update does not run on its own; you need to press the upgrade button in your WordPress dashboard.&lt;/p&gt;

&lt;p&gt;Most of the changes in version 3.7 were barely visible on the front end, but in 3.8 the changes are very visible: the admin interface has been completely overhauled!&lt;/p&gt;

&lt;p&gt;At a quick glance it may look similar to before, but look more closely and you will see quite a few changes, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Heavy use of flat design, which is the current trend.&lt;/li&gt;
&lt;li&gt;A responsive design that renders well across different screen sizes; access from a smartphone keeps getting better.&lt;/li&gt;
&lt;li&gt;Open Sans as the default font, making the interface look crisper and tidier.&lt;/li&gt;
&lt;li&gt;8 color schemes you can pick from for your admin theme.&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- Image removed: colors-1024x559.png (Lost) --&gt;

&lt;p&gt;The interface feels slightly faster to me. Most pages weigh around 300 KB, and some of the heavier ones around 500 KB. I think this could shrink further if the WordPress team switched its JavaScript-driven animations to CSS3 animations.&lt;/p&gt;

&lt;p&gt;Access from smartphones and tablets is already very good. In my tests, opening the WordPress admin directly from a smartphone browser gave better results than the WordPress app available in the Play Store.&lt;/p&gt;

&lt;h3 id=&quot;themebaru&quot;&gt;New theme&lt;/h3&gt;

&lt;!-- Image removed: twentyfourteen-1024x408.jpg (Lost) --&gt;

&lt;p&gt;Yes, exactly. Almost every major version ships a new default theme, and this time Twenty Fourteen arrives with WordPress 3.8. The theme has a magazine style and is 100% responsive. Your WordPress content can be shown as a slider or a grid, and the theme and widget management in this default theme are excellent.&lt;/p&gt;

&lt;p&gt;The rest of the changes in WordPress 3.8 are mostly bug fixes; you can see the full list of changes at &lt;a href=&quot;http://analcime.apps.dj/6CSjg&quot;&gt;http://codex.wordpress.org/Version_3.8&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title>New Features in PHP 5.6</title><link>https://rezhajul.io/posts/fitur-baru-di-php-5-6/</link><guid isPermaLink="true">https://rezhajul.io/posts/fitur-baru-di-php-5-6/</guid><description>New Features in PHP 5.6</description><pubDate>Thu, 24 Jul 2014 10:29:49 GMT</pubDate><content:encoded>&lt;p&gt;All the new features are awesome!&lt;/p&gt;

&lt;p&gt;The PHP core team recently released the latest version of PHP: 5.6.0 Beta 1. I compiled it on my machine and sampled the new features, and it turns out this version packs quite a few that really help with the applications we build. Here is my review. &lt;br /&gt;
&lt;img src=&quot;../../assets/images/selection_017.png&quot; alt=&quot;PHP&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;constantscalarexpressions&quot;&gt;Constant Scalar Expressions&lt;/h3&gt;

&lt;p&gt;Before PHP 5.6, a constant could only be declared with a static value. In PHP 5.6, a constant can be declared using basic arithmetic or basic logical expressions.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
#save as const.php
class Work {
  const SALARY = 2000;
}

var_dump(Work::SALARY);
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
#save as const56.php
class Work {
  const BONUS = 500;
  const SALARY = 2000 + self::BONUS;
}

var_dump(Work::SALARY);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can see that in &lt;code&gt;const56.php&lt;/code&gt; the constant SALARY is declared with an addition. This works in PHP 5.6, while in PHP 5.5 it causes a syntax error.&lt;/p&gt;

&lt;!-- Image removed: selection_018.png (Lost) --&gt;

&lt;h3 id=&quot;argumentunpacking&quot;&gt;Argument Unpacking&lt;/h3&gt;

&lt;p&gt;With PHP 5.6&apos;s new &lt;code&gt;...&lt;/code&gt; syntax (yes, the three dots), you can expand an array or a Traversable object into an argument list. In the Ruby programming language this syntax is known as the splat operator. It is easiest to look at some example code:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
function jumlah($a, $b, $c) {
  return $a + $b + $c;
}

# Works on every PHP 5 version
jumlah(1, 2, 3); # Returns 6
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So what if the numbers I have are sitting inside an array?&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
function jumlah($a, $b, $c) {
  return $a + $b + $c;
}
$number = [1,2,3];

#here we use the call_user_func_array function
#works on every PHP 5 version
call_user_func_array(&apos;jumlah&apos;, $number); # Returns 6

#using the splat operator
#works on PHP 5.6
jumlah(...$number);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As you can see, with the splat operator our code looks much tidier, and we no longer need to call an extra function with a long, ugly name :D.&lt;/p&gt;

&lt;h3 id=&quot;variadicfunction&quot;&gt;Variadic Function&lt;/h3&gt;

&lt;p&gt;For those still wondering what a variadic function is: it is a function that accepts however many arguments you give it. Before PHP 5.6 this was possible with &lt;code&gt;func_get_args()&lt;/code&gt;; in PHP 5.6 you use the splat operator (yes, the same splat operator mentioned in the point above). To keep it simple, here is the jumlah function again.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
# on every PHP 5 version
function jumlah($integer) {
  return $integer + array_sum(array_slice(func_get_args(), 1));
}

# on PHP 5.6
function jumlah(...$integer) {
  return array_sum($integer);
}


jumlah(1); # Returns 1
jumlah(1, 2); # Returns 3
jumlah(1, 2, 3); # Returns 6
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With this new syntax, every argument passed to the jumlah function is collected into the $integer variable as an array. We no longer need &lt;code&gt;func_get_args()&lt;/code&gt;.&lt;/p&gt;
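&lt;p&gt;Since the post already compares this syntax to Ruby&apos;s splat operator, here is the same pair of features in Python for comparison (an aside of mine, not from the PHP manual): the &lt;code&gt;*&lt;/code&gt; operator both collects variadic arguments and unpacks a list into an argument list.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Variadic function: *integers collects every argument into a tuple.
def jumlah(*integers):
    return sum(integers)

# Argument unpacking: *numbers expands the list into separate arguments.
numbers = [1, 2, 3]
print(jumlah(*numbers))  # 6
print(jumlah(1, 2))      # 3
&lt;/code&gt;&lt;/pre&gt;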

&lt;h3 id=&quot;selamattinggalrawpostdata&quot;&gt;Goodbye, Raw Post Data&lt;/h3&gt;

&lt;p&gt;So far, PHP 5.6 has removed two legacy features, one of which is &lt;code&gt;$HTTP_RAW_POST_DATA&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&quot;fileuploadlebihbesar&quot;&gt;Larger File Uploads&lt;/h3&gt;

&lt;p&gt;Before PHP 5.6, uploading files larger than 2 GB was not possible because PHP was very bad at handling and processing them. PHP 5.6 now makes it possible. On top of that, memory usage for POST data has dropped by a factor of 2 to 3, thanks to the removal of raw post data mentioned above.&lt;/p&gt;

&lt;h3 id=&quot;buildindebugger&quot;&gt;Built-in Debugger&lt;/h3&gt;

&lt;p&gt;This release also bundles a debugger called PHPDBG. The debugger ships as a SAPI (Server API) and can be used from the command line or directly from your PHP code. Learn more about PHPDBG &lt;a href=&quot;http://analcime.apps.dj/NqAXn&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;fiturbaruzip&quot;&gt;New Zip Features&lt;/h3&gt;

&lt;p&gt;PHP&apos;s Zip library also gained new features. One of the most interesting is &lt;code&gt;ZipArchive::setPassword($password)&lt;/code&gt;, which lets us put a password on a zip file.&lt;/p&gt;

&lt;h3 id=&quot;kesimpulan&quot;&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;No stable release date has been set for PHP 5.6 yet, but its new features are quite promising. I hope this short review makes you try PHP 5.6 the moment the stable version ships. If anything in this review is wrong, please let me know in the comments below.&lt;/p&gt;</content:encoded></item><item><title>Things You Did Not Know About MongoDB</title><link>https://rezhajul.io/posts/hal-yang-tidak-kamu-ketahui-tentang-mongodb/</link><guid isPermaLink="true">https://rezhajul.io/posts/hal-yang-tidak-kamu-ketahui-tentang-mongodb/</guid><description>Things You Did Not Know About MongoDB</description><pubDate>Thu, 24 Jul 2014 10:15:03 GMT</pubDate><content:encoded>&lt;p&gt;Yes, you. #whatever&lt;/p&gt;

&lt;p&gt;MongoDB is one of the trendiest databases right now, and the most popular in the NoSQL class. A while back I tried this database and it is pretty good, but many people are still unaware of its limitations, so they mistake those limitations for bugs. I wrote this article so you know the limitations MongoDB has and using it does not give you a headache :D.&lt;/p&gt;

&lt;h3 id=&quot;rakusspace&quot;&gt;Disk-space hungry&lt;/h3&gt;

&lt;p&gt;YES! The first time I used MongoDB, I found this database extremely greedy for disk space. This comes from MongoDB&apos;s strategy of avoiding disk fragmentation by preallocating large files. When you first create a database in MongoDB, say one named Rezha, it immediately claims 64 MB of space as a file named Rezha.0. Brutal, right? If your application does not need a big database, that is plainly wasteful. Once your application has used more than half the space in Rezha.0, MongoDB immediately creates a new file named Rezha.1 at double the size, 128 MB. Likewise, once Rezha.1 is more than half used, a Rezha.2 of 256 MB is created, and so on, with each new data file doubling until they reach 2 GB per file.&lt;/p&gt;
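&lt;p&gt;The growth pattern is easy to model. Here is a little sketch of mine (assuming the 64 MB starting size and 2 GB per-file cap described above) that prints the preallocated file sizes:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Model of the preallocation described above: each new data file doubles
# the previous one, capped at 2048 MB (2 GB) per file.
def prealloc_sizes_mb(n_files, start_mb=64, cap_mb=2048):
    sizes, size = [], start_mb
    for _ in range(n_files):
        sizes.append(size)
        size = min(size * 2, cap_mb)
    return sizes

print(prealloc_sizes_mb(7))  # [64, 128, 256, 512, 1024, 2048, 2048]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So a database that has barely filled its fourth file has already claimed close to a gigabyte on disk.&lt;/p&gt;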

&lt;p&gt;If disk space is one of the constraints in your project, think it through carefully before adopting this database. There is a commercial MongoDB derivative called TokuMX which, by its own account, can reduce space usage by up to 90%.&lt;/p&gt;

&lt;h3 id=&quot;limitasi32bit&quot;&gt;32-bit limitations&lt;/h3&gt;

&lt;p&gt;The 32-bit build of MongoDB is also rather poor because of another limitation: it can only handle about 2 GB of data. That is far too little for anyone planning to use MongoDB at scale. The solution? Use 64-bit!&lt;/p&gt;

&lt;h3 id=&quot;biayakonsultasimahal&quot;&gt;Expensive consulting&lt;/h3&gt;

&lt;p&gt;If you plan to consult the MongoDB team about a problem you have, they &lt;a href=&quot;http://analcime.apps.dj//XPHJn&quot;&gt;charge&lt;/a&gt; a fairly steep $450 per hour, with a two-hour minimum, so a single consultation will cost you at least $900 (around Rp 10.7 million).&lt;/p&gt;

&lt;h3 id=&quot;toolsadministrasikurang&quot;&gt;Lacking administration tools&lt;/h3&gt;

&lt;p&gt;If you are used to phpMyAdmin for MySQL, MongoDB may disappoint you, because tools like &lt;a href=&quot;http://analcime.apps.dj/qlWRF&quot;&gt;RockMongo&lt;/a&gt; are sorely lacking in features. &lt;a href=&quot;http://analcime.apps.dj/qlWRF&quot;&gt;RoboMongo&lt;/a&gt; eases the pain a little, but try it yourself to be sure&lt;/p&gt;

&lt;h3 id=&quot;officiallimitations&quot;&gt;Official limitations&lt;/h3&gt;

&lt;p&gt;The sad part is that not many people look into the limitations of the technology they are about to adopt. The MongoDB staff maintain &lt;a href=&quot;http://analcime.apps.dj/G7K5Z&quot;&gt;a page&lt;/a&gt; about MongoDB&apos;s limits; naturally, the ones I covered above are not on it :D. I hope this article leaves you aware of MongoDB&apos;s limitations so nothing catches you off guard if you decide to use this database.&lt;/p&gt;</content:encoded></item><item><title>Push Notifications for Every Error in Laravel</title><link>https://rezhajul.io/posts/push-notifications-untuk-setiap-error-di-laravel/</link><guid isPermaLink="true">https://rezhajul.io/posts/push-notifications-untuk-setiap-error-di-laravel/</guid><description>Push Notifications for Every Error in Laravel</description><pubDate>Thu, 24 Jul 2014 09:17:25 GMT</pubDate><content:encoded>&lt;p&gt;Because users would rather close your website than report the error&lt;/p&gt;

&lt;p&gt;I always want to know when an application I built runs into trouble. When our application breaks, would it not be nice to get a notification straight to our smartphone?&lt;/p&gt;

&lt;p&gt;The setup is quite easy: you only need a Gmail account and the Gmail app on iOS or Android. If you have both, and email notifications are enabled on your smartphone, you are all set. The only real step is making your application send an email whenever an error occurs.&lt;/p&gt;

&lt;p&gt;Here I use PHP&apos;s built-in mail() function, because I do not need anything special and I do not need a stunning email layout. Simple but effective. The only information I need is the error log, the client and server IPs, and the URL where the error occurred. So I wrote a script like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;$ip = Request::server(&apos;REMOTE_ADDR&apos;);
$host = getHostByAddr(Request::server(&apos;REMOTE_ADDR&apos;));
$server = Request::server(&apos;HTTP_HOST&apos;);
$url = $server.Request::server(&apos;REQUEST_URI&apos;);
$message = &quot;Client: $ip ($host)\r\nURL: http://$url\r\n\r\n$exception&quot;;
mail(&quot;you@gmail.com&quot;, &quot;Exception at $server&quot;, $message, &quot;From: $server &amp;lt;you@gmail.com&amp;gt;\r\nContent-Type: text/html;\r\nMime-Version: 1.0&quot;);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Simple, right? In Laravel 4, just put it in &lt;code&gt;app/start/global.php&lt;/code&gt; inside the &lt;code&gt;App::error&lt;/code&gt; handler. In Laravel 3 and earlier, you can put it in &lt;code&gt;application/config/error.php&lt;/code&gt;&lt;/p&gt;</content:encoded></item><item><title>What&apos;s New in WordPress 3.7</title><link>https://rezhajul.io/posts/terbaru-di-wordpress-3-7/</link><guid isPermaLink="true">https://rezhajul.io/posts/terbaru-di-wordpress-3-7/</guid><description>What&apos;s New in WordPress 3.7</description><pubDate>Thu, 24 Jul 2014 09:06:22 GMT</pubDate><content:encoded>&lt;p&gt;The CMS of the masses&lt;/p&gt;

&lt;p&gt;I really like WordPress&apos;s update frequency. Releases generally arrive every few months, so we get a batch of new features and bug fixes, yet they are not so frequent that we have to update all of our sites every day.&lt;/p&gt;

&lt;p&gt;WordPress 3.7 was released on October 24, 2013. You can download the files from wordpress.org, or click the Updates link inside the WordPress control panel. WordPress itself says &quot;you might not notice a thing, and we&apos;re okay with that&quot;. You probably will not feel any difference with this update; perhaps the only thing you will notice is that WordPress needs less maintenance than before…&lt;/p&gt;

&lt;h3 id=&quot;latarbelakangpembaruanotomatis&quot;&gt;Automatic Background Updates&lt;/h3&gt;

&lt;p&gt;I have never had any problem with the one-click upgrade; it always works. From now on, maintenance and security updates are applied automatically in the background. You will only see an &quot;Upgrade Now&quot; button when version 3.8 is released.&lt;/p&gt;

&lt;p&gt;This feature may not appeal to the most cautious administrators among you, but the WordPress team has tested it on 110,000 sites without a single failure. On average an update takes less than 25 seconds and only puts WordPress into maintenance mode for a few seconds.&lt;/p&gt;

&lt;p&gt;Fortunately, background upgrades can be configured and disabled. Watch this blog for a tutorial.&lt;/p&gt;

&lt;h3 id=&quot;passwordmeterdiperbaharui&quot;&gt;Updated Password Meter&lt;/h3&gt;

&lt;p&gt;The new password meter now recognizes common weak-password patterns such as names, dates, keyboard sequences, number sequences, and even pop-culture references. Try your password against the new meter, and if it rates your password as very weak, change it right away!&lt;/p&gt;

&lt;h3 id=&quot;pencarian&quot;&gt;Search&lt;/h3&gt;

&lt;p&gt;WordPress&apos;s search has always been adequate but rarely produces results as accurate as Google or Bing. WordPress 3.7 improves search with better relevance instead of ordering purely by date, which tended to rank recent blog posts above pages. For example, search terms matching a title now rise to the top of the list. The improvement is a bit hard to evaluate unless you have 3.6 and 3.7 installs with identical content, but the basic tests I ran looked better.&lt;/p&gt;

&lt;p&gt;And if that is not enough…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Accessibility improvements have been made&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Codex and documentation system have been updated&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;More than 437 bugs have been fixed by 211 developers&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;WordPress 3.7 is a big update. There are no fundamental changes to the WordPress core, so I suspect most plugins and themes will remain compatible. Unless you know otherwise?&lt;/p&gt;

&lt;p&gt;If all goes well, WordPress 3.8 will be released in late 2013. We may see a new dashboard, theme page, and search facility.&lt;/p&gt;</content:encoded></item><item><title>CSS Compression with Python</title><link>https://rezhajul.io/posts/kompresi-css-menggunakan-python/</link><guid isPermaLink="true">https://rezhajul.io/posts/kompresi-css-menggunakan-python/</guid><description>CSS Compression with Python</description><pubDate>Thu, 24 Jul 2014 05:48:44 GMT</pubDate><content:encoded>&lt;p&gt;Do not torture your visitors with bloated CSS&lt;/p&gt;

&lt;p&gt;CSS (Cascading Style Sheets) is one of the essential components of a website; without CSS, a website is all but certain to look unappealing. But once a website has enough components, the CSS file grows large enough to slow down access to your site. Here I will share a small script you can use to shrink your CSS.&lt;/p&gt;

&lt;p&gt;We need a Python library called cssmin; installing it is as simple as typing &lt;code&gt;pip install cssmin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Make sure your setup has pip installed, of course. Cssmin is essentially a port of the Java-based YUI CSS compressor. To minify many CSS files at once, here is the snippet:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import cssmin, time, glob

#output file name, here named all_UnixTime.css
#example output file: all_1376637310.css
outfilename = &apos;all_&apos; + str(int(time.time())) + &apos;.css&apos;
#read every css file in the current folder
with open(outfilename, &apos;w&apos;) as outfile:
    for fname in glob.glob(&apos;*.css&apos;):
        with open(fname, &apos;r&apos;) as rawfile:
            #minify the css
            minified_file = cssmin.cssmin(rawfile.read())
            #write the minified css to the output file
            outfile.write(minified_file)
        #mark where each source file ends
        outfile.write(&apos;/* ===end of &apos; + fname + &apos;===*/\n&apos;)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Save this in a file, say css.py, put it in the same folder as your CSS files, then run it from your command line:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;python css.py
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you want to dig deeper into the cssmin library, visit &lt;a href=&quot;http://analcime.apps.dj/JUsqF&quot;&gt;its GitHub page&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title>Bundled JavaScript in the Yii Framework</title><link>https://rezhajul.io/posts/bundled-javascript-di-yii-framework/</link><guid isPermaLink="true">https://rezhajul.io/posts/bundled-javascript-di-yii-framework/</guid><description>Bundled JavaScript in the Yii Framework</description><pubDate>Thu, 24 Jul 2014 05:42:19 GMT</pubDate><content:encoded>&lt;p&gt;No need to include jQuery yourself anymore.&lt;/p&gt;

&lt;p&gt;Did you know that the Yii Framework ships several JavaScript files inside its package? So before adding a JavaScript file to your web project, check first whether Yii already has it. If it does, you do not need to download and include the JavaScript manually; just call the appropriate Yii function.&lt;/p&gt;

&lt;p&gt;Yii bundles several useful, ready-to-use JavaScript packages in its core, for example jQuery and jQuery UI. Say you need jQuery UI: the line below is all it takes to include it&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;Yii::app()-&amp;gt;clientScript-&amp;gt;registerCoreScript(&apos;jquery.ui&apos;);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To find out which core scripts Yii provides, look inside the Yii Framework folder itself, specifically framework/web/js/packages.php. Here are some of the JavaScript files included in the Yii Framework:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;jquery: This is the famous jQuery file itself. By default, Yii includes the compressed jQuery build; only in debug mode does it include the uncompressed file.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;multifile: Handles uploading multiple files. Yii uses the jQuery multiple file upload plugin from Fynework.com. You will rarely include this file manually, because it is used by CMultiFileUpload.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;cookie: Cookie handling for jQuery, taken from Klaus Hartl&apos;s jQuery-Cookie.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;jquery.ui: jQuery User Interface, a trove of jQuery-based UI widgets such as calendars, drag and drop, autocomplete, and more.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;metadata: A jQuery plugin for extracting metadata values from arbitrary attributes or classes. If you want to know more, visit the jQueryMetadata site.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These core scripts exist because Yii itself needs them. For example, there is a core script named rating that is actually used by CStarRating. But we can include any of them ourselves when a particular case calls for it. Personally, I usually only need jquery and jquery.ui. How about you?&lt;/p&gt;</content:encoded></item><item><title>Using the Hashing API to Hash Passwords in PHP 5.5</title><link>https://rezhajul.io/posts/menggunakan-hashing-api-untuk-hash-password-di-php-5-5/</link><guid isPermaLink="true">https://rezhajul.io/posts/menggunakan-hashing-api-untuk-hash-password-di-php-5-5/</guid><description>Using the Hashing API to Hash Passwords in PHP 5.5</description><pubDate>Thu, 24 Jul 2014 03:12:15 GMT</pubDate><content:encoded>&lt;p&gt;Still using MD5 in this day and age?&lt;/p&gt;

&lt;p&gt;Using bcrypt is the best way to hash passwords, yet a large number of developers still rely on older, weaker algorithms such as MD5 and SHA1, and many PHP developers do not even use a salt when hashing. The new hashing API in PHP 5.5 aims to draw attention to bcrypt while hiding its complexity. In this article I will cover the basics of using PHP&amp;#39;s new hashing API.&lt;/p&gt;

&lt;p&gt;The new hashing API consists of four simple functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;password_hash()&lt;/code&gt; – hashes a password.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;password_verify()&lt;/code&gt; – verifies a password against its hash.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;password_needs_rehash()&lt;/code&gt; – checks whether a password needs to be rehashed.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;password_get_info()&lt;/code&gt; – returns information about the hashing algorithm and the options used to produce a hash.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
  &lt;p&gt;A large number of PHP developers still use very old and weak hashing algorithms, such as MD5 and SHA1.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;password_hash&quot;&gt;&lt;code&gt;password_hash()&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Although the &lt;code&gt;crypt()&lt;/code&gt; function is secure, many programmers consider it too complicated and error-prone. Some developers therefore resort to weak salts and weak algorithms to hash a password, for example:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
// Weak: MD5 is fast and trivially brute-forced, even with a salt.
$hash = md5($password . $salt);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;password_hash()&lt;/code&gt; function, on the other hand, makes your code much safer. When you need a password hashed, just pass it to the function and you get back a hash that you can store in your database.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
$hash = password_hash($password, PASSWORD_DEFAULT);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Easy, right? The first parameter is the password string to be hashed, and the second specifies the algorithm used to generate the hash.&lt;/p&gt;

&lt;p&gt;The function&amp;#39;s default algorithm is bcrypt, but a stronger algorithm may become the default in the future and could produce longer strings. If you use &lt;code&gt;PASSWORD_DEFAULT&lt;/code&gt; in your project, be sure to store the hash in a column that can hold more than 60 characters; setting the capacity to 255 characters is a good choice. You can also pass &lt;code&gt;PASSWORD_BCRYPT&lt;/code&gt; as the second parameter, in which case the result will always be 60 characters long.&lt;/p&gt;

&lt;p&gt;The important point here is that you do not have to supply a salt or a cost parameter: the new API takes care of all that for you. The salt is embedded in the hash itself, so you do not have to store it separately. If you do want to supply your own salt or cost, you can pass them as the function&amp;#39;s third argument.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
$options = [
    &apos;salt&apos; =&amp;gt; fungsi_salt(), // write your own function to generate a salt
    &apos;cost&apos; =&amp;gt; 12 // the default for this option is 10
];
$hash = password_hash($password, PASSWORD_DEFAULT, $options);
&lt;/code&gt;&lt;/pre&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;password_verify&quot;&gt;&lt;code&gt;password_verify()&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Now that you have seen how to generate a hash with the new API, let&amp;#39;s look at how to verify a password. Remember that you store the hash in the database, but when a user logs in you receive the plain-text password. The &lt;code&gt;password_verify()&lt;/code&gt; function takes the plain password and the hash string as its two arguments and returns &lt;code&gt;true&lt;/code&gt; if the hash matches the password. For example:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
if (password_verify($password, $hash)) {
    // Success!
} else {
    // Wrong password!
}
&lt;/code&gt;&lt;/pre&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;password_needs_rehash&quot;&gt;&lt;code&gt;password_needs_rehash()&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;What if you need to change the salt or cost parameters of an existing hash? This can happen when you decide to strengthen security with a stronger salt or a higher cost, and PHP itself may change the default implementation of the hashing algorithm. In all of these cases you will want to rehash the existing passwords, and &lt;code&gt;password_needs_rehash()&lt;/code&gt; tells you whether a given hash was produced with the algorithm and options you pass to it.&lt;/p&gt;
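&lt;p&gt;As a sketch, assuming &lt;code&gt;$password&lt;/code&gt; and &lt;code&gt;$hash&lt;/code&gt; come from your own login flow and storage, a rehash check fits naturally at login time:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
// Login is the only moment we hold the plain password,
// so it is the natural place to upgrade old hashes.
$options = [&apos;cost&apos; =&amp;gt; 12];

if (password_verify($password, $hash)) {
    if (password_needs_rehash($hash, PASSWORD_DEFAULT, $options)) {
        $hash = password_hash($password, PASSWORD_DEFAULT, $options);
        // ...store the new $hash in the database here...
    }
    // log the user in
}
&lt;/code&gt;&lt;/pre&gt;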

&lt;hr /&gt;

&lt;h3 id=&quot;password_get_info&quot;&gt;&lt;code&gt;password_get_info()&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;password_get_info()&lt;/code&gt; accepts a hash and returns an associative array with three elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;algo&lt;/code&gt; – a constant identifying the algorithm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;algoName&lt;/code&gt; – the name of the algorithm used&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;options&lt;/code&gt; – the options that were used when generating the hash&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
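&lt;p&gt;A quick sketch of what this looks like in practice (the exact value of the &lt;code&gt;algo&lt;/code&gt; constant can differ between PHP versions):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-php&quot;&gt;&amp;lt;?php
$hash = password_hash(&apos;rahasia&apos;, PASSWORD_BCRYPT, [&apos;cost&apos; =&amp;gt; 12]);
$info = password_get_info($hash);

echo $info[&apos;algoName&apos;];          // bcrypt
echo $info[&apos;options&apos;][&apos;cost&apos;];   // 12
&lt;/code&gt;&lt;/pre&gt;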

&lt;hr /&gt;

&lt;h3 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;The new password hashing API is far easier to use than fumbling with the &lt;code&gt;crypt()&lt;/code&gt; function. If your website currently runs on PHP 5.5, I strongly recommend using the new hashing API. If you are on PHP 5.3.7 or later, you can use the &lt;a href=&quot;http://analcime.apps.dj/qE9oI&quot;&gt;password_compat&lt;/a&gt; library, which emulates the new API and automatically disables itself once your PHP version is upgraded to 5.5.&lt;/p&gt;</content:encoded></item></channel></rss>