<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Code-Slapping]]></title><description><![CDATA[A blog recording my dev life and thoughts like non-stop kitty baps]]></description><link>https://www.codeslapping.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1767746916236/86296f46-8d52-4e78-8834-8d9961449746.png</url><title>Code-Slapping</title><link>https://www.codeslapping.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 04:05:49 GMT</lastBuildDate><atom:link href="https://www.codeslapping.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Why HikariCP is the Default Choice: Analysis and Optimization Guide]]></title><description><![CDATA[In Spring Boot, HikariCP has been the default connection pool since version 2.0. As a backend developer, understanding "why" we use it and how to tune it for production is crucial for building scalable systems.
The Spring documentation explicitly sta...]]></description><link>https://www.codeslapping.com/why-hikaricp-is-the-default-choice-analysis-and-optimization-guide</link><guid isPermaLink="true">https://www.codeslapping.com/why-hikaricp-is-the-default-choice-analysis-and-optimization-guide</guid><category><![CDATA[hikari]]></category><category><![CDATA[hikaricp]]></category><dc:creator><![CDATA[GyungJae Ham]]></dc:creator><pubDate>Wed, 07 Jan 2026 02:04:43 GMT</pubDate><content:encoded><![CDATA[<p>In Spring Boot, HikariCP has been the default connection pool since version 2.0. As a backend developer, understanding "why" we use it and how to tune it for production is crucial for building scalable systems.</p>
<p>The Spring documentation explicitly states the preference order for connection pooling:</p>
<ol>
<li><p><strong>HikariCP</strong>: Preferred for its performance and concurrency. If available, it is always chosen.</p>
</li>
<li><p><strong>Tomcat Pooling</strong>: The second choice if HikariCP is unavailable.</p>
</li>
<li><p><strong>Commons DBCP2</strong>: The third alternative.</p>
</li>
<li><p><strong>Oracle UCP</strong>: Used as a last resort.</p>
</li>
</ol>
<p>Let’s dive into why HikariCP is considered the "gold standard" and how we can optimize it for live environments.</p>
<hr />
<h2 id="heading-why-use-a-connection-pool-dbcp-at-all">Why Use a Connection Pool (DBCP) at All?</h2>
<p>In raw JDBC, connecting to a database is an expensive operation. Every user request would require:</p>
<ol>
<li><p>Loading the DB driver.</p>
</li>
<li><p>Establishing a <strong>TCP/IP connection</strong> (the infamous 3-way handshake).</p>
</li>
<li><p>Authentication (sending ID/PW) and creating a DB session.</p>
</li>
<li><p>Returning the connection object to the client.</p>
</li>
</ol>
<p>Doing this for every request is extremely inefficient. A Connection Pool pre-allocates these connections so the application can simply "borrow" and "return" them, bypassing the heavy handshake and initialization overhead.</p>
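<p>The borrow-and-return cycle can be sketched with a toy pool. This is a hypothetical illustration built on a plain blocking queue of pre-created "connections" (here just strings), not HikariCP's actual implementation:</p>
<pre><code class="lang-java">import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ToyConnectionPool {
    private final BlockingQueue&lt;String&gt; idle;

    public ToyConnectionPool(int size) {
        idle = new ArrayBlockingQueue&lt;&gt;(size);
        for (int i = 0; i &lt; size; i++) {
            // The expensive work (TCP handshake, auth, session setup) happens once, here.
            idle.add("connection-" + i);
        }
    }

    // Borrowing is a cheap queue poll -- no handshake, no authentication.
    // Returns null if no connection frees up within the timeout.
    public String borrow(long timeoutMs) {
        try {
            return idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    // Returning simply hands the live connection back for the next caller.
    public void giveBack(String connection) {
        idle.offer(connection);
    }
}
</code></pre>
<p>A production pool like HikariCP layers validation, leak detection, and lifetime management on top of this core borrow/return idea.</p>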
<hr />
<h2 id="heading-performance-analysis">Performance Analysis</h2>
<p>The benchmark results from the HikariCP team show a staggering difference:</p>
<ul>
<li><p><strong>Connection Cycle (ops/ms):</strong> HikariCP handles ~50,000 operations per millisecond, while its closest competitor (Vibur) manages only about 1/10th of that.</p>
</li>
<li><p><strong>Statement Cycle (ops/ms):</strong> The gap remains just as wide, proving HikariCP's dominance in executing SQL statements.</p>
</li>
</ul>
<h3 id="heading-the-secret-sauce-how-hikaricp-achieves-this">The Secret Sauce: How HikariCP Achieves This</h3>
<p>HikariCP goes "down the rabbit hole" with low-level optimizations:</p>
<ol>
<li><p><strong>Bytecode-level Engineering</strong>: The team studied JIT (Just-In-Time) compiler assembly output to ensure critical routines stay under the "Inline Threshold." By making methods inlineable, they eliminate the overhead of method calls.</p>
</li>
<li><p><strong>CPU Cache Optimization</strong>: They optimized the code to fit within L1/L2 caches. By minimizing instructions, they ensure tasks complete within the OS scheduler's time slice, avoiding the performance hit of a "cache miss" when a thread is rescheduled to a different core.</p>
</li>
<li><p><strong>Custom Collections (FastList)</strong>: They replaced <code>ArrayList</code> with a custom <code>FastList</code>.</p>
<ul>
<li><p><strong>Removed Range Checks</strong>: It skips index validation (e.g., checking if the index exceeds array size) to save cycles.</p>
</li>
<li><p><strong>Optimized Scans</strong>: Removal scans run from tail to head, matching the way pools access statements (the last statement opened is usually the first one closed).</p>
</li>
</ul>
</li>
</ol>
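<p>A minimal sketch of the <code>FastList</code> idea (hypothetical and heavily simplified, for illustration only): <code>get()</code> leans on the JVM's own array bounds check instead of adding a second one, and removal scans from the tail, since the most recently opened statement is typically the first one closed.</p>
<pre><code class="lang-java">public class FastListSketch {
    private Object[] elements = new Object[32];
    private int size;

    public void add(Object element) {
        if (size == elements.length) {
            Object[] grown = new Object[elements.length * 2];
            System.arraycopy(elements, 0, grown, 0, size);
            elements = grown;
        }
        elements[size++] = element;
    }

    // No explicit range check: a bad index still fails, but via the JVM's
    // built-in ArrayIndexOutOfBoundsException rather than a redundant branch.
    public Object get(int index) {
        return elements[index];
    }

    // Tail-first scan: statements are usually closed in reverse open order,
    // so the match is typically found on the first iteration.
    public boolean remove(Object element) {
        for (int i = size - 1; i &gt;= 0; i--) {
            if (elements[i] == element) {
                System.arraycopy(elements, i + 1, elements, i, size - i - 1);
                size--;
                return true;
            }
        }
        return false;
    }

    public int size() {
        return size;
    }
}
</code></pre>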
<hr />
<h2 id="heading-mysql-performance-optimization-options">MySQL Performance Optimization Options</h2>
<p>The HikariCP team suggests several <code>data-source-properties</code> to maximize MySQL throughput:</p>
<h3 id="heading-1-preparedstatement-caching">1. PreparedStatement Caching</h3>
<ul>
<li><p><code>cachePrepStmts=true</code>: Enables client-side caching of <code>PreparedStatement</code> objects.</p>
</li>
<li><p><code>prepStmtCacheSize=250</code>: Number of statements to cache per connection. (Default: 25, Recommended: 250–500).</p>
</li>
<li><p><code>prepStmtCacheSqlLimit=2048</code>: Maximum length of a SQL string to cache. Essential for long ORM-generated queries.</p>
</li>
</ul>
<h3 id="heading-2-server-side-amp-protocol-optimization">2. Server-Side &amp; Protocol Optimization</h3>
<ul>
<li><p><code>useServerPrepStmts=true</code>: Instead of sending full SQL strings, it sends a template to the server and only passes parameters thereafter. This reduces network traffic and allows the DB to reuse execution plans.</p>
</li>
<li><p><code>rewriteBatchedStatements=true</code>: Optimizes bulk INSERT/UPDATE operations.</p>
</li>
<li><p><code>useLocalSessionState=true</code>: Tracks session state locally to avoid unnecessary round-trips to the server.</p>
</li>
</ul>
<h3 id="heading-3-metadata-amp-stats">3. Metadata &amp; Stats</h3>
<ul>
<li><p><code>elideSetAutoCommits=true</code>: Eliminates redundant <code>setAutoCommit</code> calls.</p>
</li>
<li><p><code>maintainTimeStats=false</code>: Disables internal timing metrics to reduce overhead.</p>
<ul>
<li><em>Note: While this boosts performance, it makes troubleshooting harder as you lose metrics like connection acquisition time.</em></li>
</ul>
</li>
</ul>
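<p>Collected in code, the properties above look like this. This is a dependency-free sketch using <code>java.util.Properties</code>; with HikariCP you would typically apply each entry via <code>HikariConfig.addDataSourceProperty(...)</code> or the equivalent YAML configuration:</p>
<pre><code class="lang-java">import java.util.Properties;

public class MySqlDriverProps {
    // The MySQL Connector/J properties discussed above, gathered in one place.
    public static Properties recommended() {
        Properties props = new Properties();
        props.setProperty("cachePrepStmts", "true");           // client-side statement cache
        props.setProperty("prepStmtCacheSize", "250");         // per-connection cache entries
        props.setProperty("prepStmtCacheSqlLimit", "2048");    // cache long ORM queries too
        props.setProperty("useServerPrepStmts", "true");       // server-side prepared statements
        props.setProperty("useLocalSessionState", "true");     // avoid state round-trips
        props.setProperty("rewriteBatchedStatements", "true"); // batch INSERT/UPDATE rewrite
        props.setProperty("elideSetAutoCommits", "true");      // skip redundant setAutoCommit
        props.setProperty("maintainTimeStats", "false");       // trade metrics for throughput
        return props;
    }
}
</code></pre>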
<hr />
<h2 id="heading-recommended-configuration-for-production">Recommended Configuration for Production</h2>
<h3 id="heading-database-level-reference">Database Level (Reference)</h3>
<p>Ensure your DB server is configured to handle the pool size.</p>
<p><strong>MySQL:</strong></p>
<pre><code class="lang-sql">max_connections = 1000
innodb_buffer_pool_size = 4G
wait_timeout = 28800
</code></pre>
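<p>One relationship worth verifying in code review: HikariCP's <code>max-lifetime</code> is in milliseconds while MySQL's <code>wait_timeout</code> is in seconds, and the former must stay below the latter so the pool never hands out a connection the server has already closed. A tiny unit-conversion check, using the values from this guide as an example:</p>
<pre><code class="lang-java">import java.util.concurrent.TimeUnit;

public class TimeoutSanityCheck {
    // max-lifetime (ms, HikariCP) must be shorter than wait_timeout (s, MySQL).
    public static boolean lifetimeIsSafe(long maxLifetimeMs, long waitTimeoutSec) {
        return maxLifetimeMs &lt; TimeUnit.SECONDS.toMillis(waitTimeoutSec);
    }

    public static void main(String[] args) {
        // 20 minutes vs. 8 hours -- comfortably safe.
        System.out.println(lifetimeIsSafe(1_200_000L, 28_800L));
    }
}
</code></pre>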
<h3 id="heading-spring-boot-applicationyml">Spring Boot <code>application.yml</code></h3>
<p>Here is a production-ready configuration focused on performance and stability.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">spring:</span>
  <span class="hljs-attr">datasource:</span>
    <span class="hljs-attr">url:</span> <span class="hljs-string">jdbc:mysql://db-url:3306/dbname?rewriteBatchedStatements=true&amp;characterEncoding=UTF-8&amp;serverTimezone=Asia/Seoul&amp;useSSL=true</span>
    <span class="hljs-attr">username:</span> <span class="hljs-string">${DB_USERNAME}</span>
    <span class="hljs-attr">password:</span> <span class="hljs-string">${DB_PASSWORD}</span>
    <span class="hljs-attr">driver-class-name:</span> <span class="hljs-string">com.mysql.cj.jdbc.Driver</span>
    <span class="hljs-attr">hikari:</span>
      <span class="hljs-attr">pool-name:</span> <span class="hljs-string">HikariCP-Primary</span>
      <span class="hljs-comment"># Sizing: (core_count * 2) + effective_spindle_count</span>
      <span class="hljs-attr">maximum-pool-size:</span> <span class="hljs-number">10</span>  
      <span class="hljs-attr">minimum-idle:</span> <span class="hljs-number">5</span>
      <span class="hljs-attr">idle-timeout:</span> <span class="hljs-number">300000</span>     <span class="hljs-comment"># 5 mins</span>
      <span class="hljs-attr">connection-timeout:</span> <span class="hljs-number">5000</span> <span class="hljs-comment"># 5 secs (Fail fast)</span>
      <span class="hljs-attr">max-lifetime:</span> <span class="hljs-number">1200000</span>    <span class="hljs-comment"># 20 mins (Must be shorter than DB wait_timeout)</span>
      <span class="hljs-attr">auto-commit:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">data-source-properties:</span>
        <span class="hljs-attr">cachePrepStmts:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">prepStmtCacheSize:</span> <span class="hljs-number">250</span>
        <span class="hljs-attr">prepStmtCacheSqlLimit:</span> <span class="hljs-number">2048</span>
        <span class="hljs-attr">useServerPrepStmts:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">useLocalSessionState:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">rewriteBatchedStatements:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">cacheResultSetMetadata:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">cacheServerConfiguration:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">elideSetAutoCommits:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">maintainTimeStats:</span> <span class="hljs-literal">false</span>

  <span class="hljs-attr">jpa:</span>
    <span class="hljs-attr">database-platform:</span> <span class="hljs-string">org.hibernate.dialect.MySQLDialect</span>
    <span class="hljs-attr">hibernate:</span>
      <span class="hljs-attr">ddl-auto:</span> <span class="hljs-string">validate</span>
    <span class="hljs-attr">properties:</span>
      <span class="hljs-attr">hibernate:</span>
        <span class="hljs-attr">show_sql:</span> <span class="hljs-literal">false</span>
        <span class="hljs-attr">format_sql:</span> <span class="hljs-literal">false</span>
        <span class="hljs-attr">jdbc:</span>
          <span class="hljs-attr">batch_size:</span> <span class="hljs-number">100</span>
          <span class="hljs-attr">order_inserts:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">order_updates:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">query:</span>
          <span class="hljs-attr">in_clause_parameter_padding:</span> <span class="hljs-literal">true</span>
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>HikariCP isn't just "fast"; it's engineered at the bytecode level to respect CPU architecture. By combining its lean design with the proper MySQL driver properties, you can significantly reduce latency in your backend services.</p>
]]></content:encoded></item><item><title><![CDATA[Deep Dive into ObjectMapper: From Internal Mechanics to Exception Handling]]></title><description><![CDATA[try {
    objectMapper.readValue(request, A::class.java)
} catch (e: JsonProcessingException) {
    throw CoreException(InMemoryExceptionCode.FAILED_PARSE_JSON)
} catch (e: JsonMappingException) {
    throw CoreException(InMemoryExceptionCode.FAILED_...]]></description><link>https://www.codeslapping.com/deep-dive-into-objectmapper-from-internal-mechanics-to-exception-handling</link><guid isPermaLink="true">https://www.codeslapping.com/deep-dive-into-objectmapper-from-internal-mechanics-to-exception-handling</guid><category><![CDATA[objectmapper]]></category><dc:creator><![CDATA[GyungJae Ham]]></dc:creator><pubDate>Wed, 07 Jan 2026 01:57:19 GMT</pubDate><content:encoded><![CDATA[<pre><code class="lang-kotlin"><span class="hljs-keyword">try</span> {
    objectMapper.readValue(request, A::class.java)
} <span class="hljs-keyword">catch</span> (e: JsonProcessingException) {
    <span class="hljs-keyword">throw</span> CoreException(InMemoryExceptionCode.FAILED_PARSE_JSON)
} <span class="hljs-keyword">catch</span> (e: JsonMappingException) {
    <span class="hljs-keyword">throw</span> CoreException(InMemoryExceptionCode.FAILED_MAP_TO_SCHEMA)
}
</code></pre>
<p>As a web backend developer, you quickly realize how frequently <code>ObjectMapper</code> is used. In a typical Controller, the Jackson library's <code>ObjectMapper</code> handles the deserialization of JSON requests into our desired object types. It is also common practice for developers to inject <code>ObjectMapper</code> to handle data when interacting with Redis.</p>
<p>To be honest, I didn't find the code above strange at first. In fact, based on my experience, I thought writing it this way was the only way to perfectly control the frequent exceptions thrown by <code>ObjectMapper</code>. This assumption likely stemmed from not looking deeply enough into the internal libraries that Spring relies on.</p>
<hr />
<h2 id="heading-how-objectmapper-operates-in-a-controller">How ObjectMapper Operates in a Controller</h2>
<ol>
<li><p><strong>Identify Content-Type</strong>: When an HTTP request arrives, the server checks the <code>Content-Type</code>.</p>
</li>
<li><p><strong>Trigger Jackson</strong>: If it is <code>application/json</code>, Jackson’s <code>ObjectMapper</code> is invoked.</p>
</li>
<li><p><strong>Deserialization</strong>: <code>ObjectMapper</code> converts the JSON string into an instance of the specified class.</p>
</li>
</ol>
<blockquote>
<p><strong>Serialization</strong>: The process of converting an object in memory into a format that can be stored or transmitted.</p>
<ul>
<li>Converting Java/Kotlin objects into JSON strings or byte streams.</li>
</ul>
<p><strong>Deserialization</strong>: The process of converting stored or transmitted data back into an object that can be used in memory.</p>
<ul>
<li>Converting JSON strings back into Java/Kotlin objects.</li>
</ul>
</blockquote>
<p>A quick question: Does <code>ObjectMapper</code> also work when communicating with a Database?</p>
<ul>
<li><p><strong>No.</strong> <code>ObjectMapper</code> is primarily responsible for converting JSON objects during HTTP communication.</p>
</li>
<li><p>When communicating with a DB, <strong>JPA</strong> maps objects (Entities) based on table metadata, while <strong>MyBatis</strong> maps SQL results to objects.</p>
</li>
</ul>
<p>So far, we know <code>ObjectMapper</code> deserializes JSON strings into objects. However, data actually arrives at the server as a <strong>byte stream</strong>, not a raw JSON string. Let’s look at where that transformation happens.</p>
<blockquote>
<ol>
<li><p><strong>Client sends JSON</strong>: <code>{"name": "Kim", "age": 25}</code></p>
</li>
<li><p><strong>Network Transmission</strong>:</p>
<ul>
<li><p>The HTTP request is converted into a byte stream.</p>
</li>
<li><p><code>Content-Type: application/json</code> is included in the header.</p>
</li>
</ul>
</li>
<li><p><strong>Spring Server</strong>:</p>
<ul>
<li><p>Byte stream → Converted to JSON string.</p>
</li>
<li><p><code>ObjectMapper</code> converts the JSON string → Kotlin/Java object.</p>
</li>
</ul>
</li>
</ol>
</blockquote>
<hr />
<h2 id="heading-who-converts-byte-streams-to-json">Who Converts Byte Streams to JSON?</h2>
<p>In Spring MVC, the <code>HttpMessageConverter</code> is responsible for converting the HTTP request byte stream into a JSON string.</p>
<ol>
<li><p><strong>Arrival</strong>: The HTTP request byte stream arrives.</p>
</li>
<li><p><strong>Read</strong>: The byte stream is read via <code>ServletInputStream</code>.</p>
</li>
<li><p><strong>Process</strong>: <code>MappingJackson2HttpMessageConverter</code> takes over.</p>
<ul>
<li><p>It uses <code>ObjectMapper</code> internally.</p>
</li>
<li><p>It converts the byte stream into a string using <code>InputStreamReader</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Convert</strong>: <code>ObjectMapper</code> converts the JSON string into an object.</p>
</li>
</ol>
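<p>The byte-stream-to-string step can be reproduced with plain JDK classes. A minimal conceptual sketch of what the converter relies on (Jackson can also consume the <code>InputStream</code> directly):</p>
<pre><code class="lang-java">import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class BodyReader {
    // Wrap the raw byte stream in an InputStreamReader so bytes are decoded
    // into characters (UTF-8 here) -- the same bridge the converter uses.
    public static String readBody(InputStream body) {
        StringBuilder sb = new StringBuilder();
        try (InputStreamReader reader = new InputStreamReader(body, StandardCharsets.UTF_8)) {
            int ch;
            while ((ch = reader.read()) != -1) {
                sb.append((char) ch);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return sb.toString();
    }
}
</code></pre>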
<p>Looking at Spring's default configuration, we can see where converters are added in the WebMvc configuration classes.</p>
<pre><code class="lang-java"><span class="hljs-comment">// Inside WebMvcConfigurationSupport</span>
<span class="hljs-keyword">if</span> (jackson2XmlPresent) {
    Jackson2ObjectMapperBuilder builder = Jackson2ObjectMapperBuilder.xml();
    <span class="hljs-keyword">if</span> (<span class="hljs-keyword">this</span>.applicationContext != <span class="hljs-keyword">null</span>) {
        builder.applicationContext(<span class="hljs-keyword">this</span>.applicationContext);
    }
    messageConverters.add(<span class="hljs-keyword">new</span> MappingJackson2XmlHttpMessageConverter(builder.build()));
    <span class="hljs-comment">// ...</span>
}
</code></pre>
<p>If you examine <code>MappingJackson2HttpMessageConverter</code>, it inherits from <code>AbstractJackson2HttpMessageConverter</code>. Inside that class, you can see the part where it receives the byte stream through an <code>InputStream</code> object.</p>
<pre><code class="lang-java"><span class="hljs-function"><span class="hljs-keyword">private</span> Object <span class="hljs-title">readJavaType</span><span class="hljs-params">(JavaType javaType, HttpInputMessage inputMessage)</span> <span class="hljs-keyword">throws</span> IOException </span>{
    <span class="hljs-comment">// ...</span>
    <span class="hljs-keyword">try</span> {
        InputStream inputStream = StreamUtils.nonClosing(inputMessage.getBody());
        <span class="hljs-comment">// ...</span>
</code></pre>
<h3 id="heading-end-to-end-flow-summary">End-to-End Flow Summary</h3>
<ul>
<li><p><strong>Initial HTTP Processing</strong>: Request arrives → Tomcat Connector assigns a thread → Request parsed into <code>HttpServletRequest</code>.</p>
</li>
<li><p><strong>FilterChain</strong>: Passes through <code>DelegatingFilterProxy</code> → Spring Security's filter chain → etc.</p>
</li>
<li><p><strong>DispatcherServlet</strong>: <code>doDispatch()</code> is called → Find Handler via <code>HandlerMapping</code> → Execute via <code>HandlerAdapter</code>.</p>
</li>
<li><p><strong>@RequestBody Processing</strong>: Read body via <code>ServletInputStream</code> → <code>HttpMessageConverter</code> (Byte Stream → JSON → Object).</p>
</li>
<li><p><strong>@ResponseBody Processing</strong>: Return value processed by <code>HttpMessageConverter</code> (Object → JSON → Byte Stream) → Response written via <code>ServletOutputStream</code>.</p>
</li>
</ul>
<hr />
<h2 id="heading-moving-beyond-try-catch-objectmapper-configurations">Moving Beyond Try-Catch: ObjectMapper Configurations</h2>
<p>Let's analyze the exceptions handled in the original code:</p>
<ol>
<li><p><code>JsonProcessingException</code>: The root exception for Jackson. It covers all issues during JSON parsing or generation (syntax errors, incomplete strings). Since these are often "human errors" from the client side, we might still need some level of control here.</p>
</li>
<li><p><code>JsonMappingException</code>: A subclass of <code>JsonProcessingException</code> for specific mapping issues (type mismatch, missing fields).</p>
<ul>
<li><em>Realization</em>: My original code caught <code>JsonProcessingException</code> first. Since it's the parent, <code>JsonMappingException</code> would never be caught in its own block. I should have analyzed the library hierarchy more carefully.</li>
</ul>
</li>
</ol>
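<p>The parent-before-child pitfall can be reproduced with JDK exceptions: <code>NumberFormatException</code> extends <code>IllegalArgumentException</code>, just as <code>JsonMappingException</code> extends <code>JsonProcessingException</code>. Note that Java's compiler rejects an unreachable catch block outright, while Kotlin compiles it silently, which is why the original code looked harmless:</p>
<pre><code class="lang-java">public class CatchOrderDemo {
    public static String classify(String input) {
        try {
            Integer.parseInt(input);
            return "ok";
        } catch (NumberFormatException e) {
            // The subclass MUST be caught first; swapping these two blocks
            // would make this branch unreachable (a compile error in Java).
            return "bad-number";
        } catch (IllegalArgumentException e) {
            return "bad-argument";
        }
    }
}
</code></pre>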
<hr />
<h2 id="heading-recommended-objectmapper-configurations">Recommended ObjectMapper Configurations</h2>
<p>Instead of messy try-catches, we can configure <code>ObjectMapper</code> to handle many common issues gracefully.</p>
<h3 id="heading-1-basic-amp-deserialization-settings">1. Basic &amp; Deserialization Settings</h3>
<pre><code class="lang-kotlin">objectMapper.apply {
    setSerializationInclusion(JsonInclude.Include.NON_NULL) <span class="hljs-comment">// Exclude nulls</span>
    configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, <span class="hljs-literal">false</span>) <span class="hljs-comment">// Ignore unknown fields</span>
    configure(DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_AS_NULL, <span class="hljs-literal">true</span>) <span class="hljs-comment">// Unknown Enum as null</span>
    registerModule(JavaTimeModule()) <span class="hljs-comment">// Support Java 8 Date/Time</span>
}
</code></pre>
<h3 id="heading-2-custom-wrapper-for-clean-code">2. Custom Wrapper for Clean Code</h3>
<p>Since we cannot completely eliminate <code>JsonProcessingException</code>, I recommend using a Wrapper class:</p>
<pre><code class="lang-kotlin"><span class="hljs-meta">@Component</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">JsonConverter</span></span>(<span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> objectMapper: ObjectMapper) {
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> log = LoggerFactory.getLogger(JsonConverter::<span class="hljs-keyword">class</span>.java)

    <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-type">&lt;T&gt;</span> <span class="hljs-title">fromJson</span><span class="hljs-params">(json: <span class="hljs-type">String</span>, type: <span class="hljs-type">Class</span>&lt;<span class="hljs-type">T</span>&gt;)</span></span>: Optional&lt;T&gt; {
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">try</span> {
            Optional.ofNullable(objectMapper.readValue(json, type))
        } <span class="hljs-keyword">catch</span> (e: JsonProcessingException) {
            log.error(<span class="hljs-string">"JSON Conversion Failed: {}"</span>, e.message)
            Optional.empty()
        }
    }
}
</code></pre>
<hr />
<h2 id="heading-supplemental-issues-with-mappingjackson2httpmessageconverter">Supplemental: Issues with MappingJackson2HttpMessageConverter</h2>
<p>I’d like to share an issue discussed in my dev community regarding <code>MappingJackson2HttpMessageConverter</code>.</p>
<p><strong>The Problem</strong>: When communicating with an external partner API using <code>WebClient</code> or <code>RestClient</code>, an error occurred stating the request body was empty. Interestingly, it worked fine with <code>OpenFeign</code> or when sending data as a raw <code>String</code>.</p>
<p><strong>The Cause: Chunked Transfer Encoding</strong>. When you pass an <code>Object</code> directly to the request body, <code>MappingJackson2HttpMessageConverter</code> triggers. In Spring 6.1+, to optimize memory, <code>RestTemplate</code> and <code>RestClient</code> no longer buffer the request body by default. Consequently, the <code>Content-Length</code> header is not set, and the data is sent using <code>Transfer-Encoding: chunked</code>.</p>
<p>If the external partner's server does not support chunked encoding, it fails.</p>
<p><strong>Solutions</strong>:</p>
<ol>
<li><p>Override <code>getContentLength</code> in <code>MappingJackson2HttpMessageConverter</code>.</p>
</li>
<li><p>Wrap the <code>ClientHttpRequestFactory</code> with <code>BufferingClientHttpRequestFactory</code> to force buffering (and thus set <code>Content-Length</code>).</p>
</li>
<li><p>Send the data as a <code>String</code> (which uses <code>StringHttpMessageConverter</code> that provides a <code>Content-Length</code>).</p>
</li>
</ol>
<p>This change was documented in the <a target="_blank" href="https://github.com/spring-projects/spring-framework/wiki/Spring-Framework-6.1-Release-Notes">Spring Framework 6.1 Release Notes</a> to optimize memory usage. It’s a crucial reminder that keeping up with release notes is just as important as writing clean code!</p>
]]></content:encoded></item><item><title><![CDATA[What Happens to the Thread Count When Running a JAR on Linux?]]></title><description><![CDATA[JVM threads are mapped directly to OS threads.


This is known as the Native Thread Implementation or the 1:1 Threading Model.

Thread Pool in Spring Boot

By default, Spring Boot creates a thread pool when using an embedded Tomcat server.

The defau...]]></description><link>https://www.codeslapping.com/what-happens-to-the-thread-count-when-running-a-jar-on-linux</link><guid isPermaLink="true">https://www.codeslapping.com/what-happens-to-the-thread-count-when-running-a-jar-on-linux</guid><category><![CDATA[Threads]]></category><category><![CDATA[jar]]></category><dc:creator><![CDATA[GyungJae Ham]]></dc:creator><pubDate>Wed, 07 Jan 2026 01:36:45 GMT</pubDate><content:encoded><![CDATA[<ul>
<li>JVM threads are mapped directly to OS threads.</li>
<li>This is known as the <strong>Native Thread Implementation</strong> or the <strong>1:1 Threading Model</strong>.</li>
</ul>
<h3 id="heading-thread-pool-in-spring-boot">Thread Pool in Spring Boot</h3>
<ul>
<li><p>By default, Spring Boot creates a thread pool when using an embedded Tomcat server.</p>
</li>
<li><p>The default configuration allows for a maximum of <strong>200 threads</strong>.</p>
</li>
<li><p>Each incoming request is handled by a worker thread from this pool.</p>
</li>
</ul>
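<p>Because of the 1:1 model, the thread count the OS sees can be read back from inside the JVM. A small sketch using the standard <code>ThreadMXBean</code>:</p>
<pre><code class="lang-java">import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCount {
    // Every live JVM thread here is backed by one native OS thread, so this
    // figure should roughly match what `ps -eLf` reports for the Java process.
    public static int liveThreads() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        return bean.getThreadCount();
    }

    public static void main(String[] args) {
        System.out.println("Live JVM threads: " + liveThreads());
    }
}
</code></pre>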
<hr />
<h2 id="heading-q1-if-i-spin-up-a-container-using-a-docker-image-does-that-container-get-assigned-processes-and-threads-from-the-host-os">Q1. If I spin up a container using a Docker image, does that container get assigned processes and threads from the host OS?</h2>
<h3 id="heading-1-relationship-between-containers-and-processes">1. Relationship Between Containers and Processes</h3>
<ul>
<li><p>A Docker container runs as an isolated group of processes on the host OS.</p>
</li>
<li><p>It leverages Linux <strong>namespaces</strong> and <strong>cgroups</strong> to isolate processes and limit resource usage.</p>
</li>
<li><p>In reality, it shares the host OS kernel.</p>
</li>
</ul>
<h3 id="heading-2-in-the-case-of-spring-boot-applications">2. In the Case of Spring Boot Applications</h3>
<ul>
<li><p>The JVM running inside the container is also a process on the host OS.</p>
</li>
<li><p>JVM threads are still mapped to the host OS's native threads.</p>
</li>
<li><p>However, they are subject to the container's resource constraints (CPU, Memory, etc.).</p>
</li>
</ul>
<h3 id="heading-3-verification-commands">3. Verification Commands</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Check container processes from the host OS</span>
docker top [container-id]

<span class="hljs-comment"># Check processes inside the container</span>
docker exec [container-id] ps -ef

<span class="hljs-comment"># Check specific threads</span>
docker exec [container-id] ps -eLf
</code></pre>
<h3 id="heading-1-container-structure">1. Container Structure</h3>
<ul>
<li><p>Each container includes its own application, libraries, and runtime.</p>
</li>
<li><p>Containers remain isolated from one another.</p>
</li>
</ul>
<h3 id="heading-2-namespace-isolation">2. Namespace Isolation</h3>
<ul>
<li><p><strong>PID Namespace</strong>: Isolates Process IDs.</p>
<ul>
<li>A process may have PID 1 inside the container but appear as PID 1234 on the host system.</li>
</ul>
</li>
<li><p><strong>Network Namespace</strong>: Isolates the network stack.</p>
<ul>
<li><p>Provides each container with an independent network stack (interfaces, IP addresses, routing tables, port numbers, iptables rules).</p>
</li>
<li><p>When a container is created, Docker generates a <strong>veth (virtual ethernet)</strong> pair.</p>
</li>
<li><p>One end resides in the host's network namespace, while the other (eth0) resides in the container's.</p>
</li>
<li><p>These interfaces act like a pipe to forward traffic.</p>
</li>
</ul>
</li>
<li><p><strong>Port Mapping</strong></p>
<ul>
<li><code>-p 8080:80</code></li>
</ul>
</li>
<li><p><strong>Mount Namespace</strong>: Isolates filesystem mount points.</p>
</li>
<li><p><strong>User Namespace</strong>: Isolates User and Group IDs.</p>
</li>
</ul>
<h3 id="heading-3-resource-control-cgroups">3. Resource Control (cgroups)</h3>
<ul>
<li>Manages CPU, Memory, and I/O usage.</li>
</ul>
<h3 id="heading-4-kernel-sharing">4. Kernel Sharing</h3>
<ul>
<li>All containers share the same host OS kernel and access system resources through it.</li>
</ul>
<hr />
<h2 id="heading-q2-in-a-kotlin-spring-boot-environment-are-the-threads-in-the-dispatchers-pool-used-by-coroutines-also-os-threads">Q2. In a Kotlin Spring Boot environment, are the threads in the Dispatchers pool used by Coroutines also OS threads?</h2>
<h3 id="heading-1-dispatcher-thread-pools">1. Dispatcher Thread Pools</h3>
<ul>
<li><p><strong>Dispatchers.Default</strong>: Uses a thread pool sized to the number of CPU cores (minimum 2).</p>
</li>
<li><p><strong>Dispatchers.IO</strong>: Uses a shared thread pool limited to 64 threads by default (or the number of cores, if higher).</p>
</li>
<li><p>All of these are indeed <strong>native OS threads</strong>.</p>
</li>
</ul>
<h3 id="heading-2-coroutines-vs-threads">2. Coroutines vs. Threads</h3>
<ul>
<li><p>While Coroutines are called "lightweight threads," they are actually units of work executed on top of threads.</p>
</li>
<li><p>Multiple coroutines can run on a single thread (<strong>M:N mapping</strong>).</p>
</li>
<li><p>Switching between coroutines is significantly "lighter" than switching between threads.</p>
</li>
</ul>
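<p>A rough JDK analogy for the M:N idea: submit many small tasks to an <code>ExecutorService</code> backed by a handful of OS threads. Coroutines go further, since they can suspend mid-task, but the core point that units of work outnumber threads is the same:</p>
<pre><code class="lang-java">import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ManyTasksFewThreads {
    // Run many tasks on a few OS threads and count the distinct carrier
    // threads that actually executed them.
    public static int distinctCarrierThreads(int tasks, int threads) {
        Set&lt;String&gt; names = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i &lt; tasks; i++) {
            pool.submit(() -&gt; {
                names.add(Thread.currentThread().getName());
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return names.size();
    }
}
</code></pre>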
<hr />
<h2 id="heading-q3-when-a-coroutine-scope-is-executed-is-a-specific-thread-type-assigned-and-is-switching-between-coroutines-really-cheaper-than-thread-context-switching">Q3. When a Coroutine Scope is executed, is a specific thread type assigned? And is switching between coroutines really cheaper than thread context switching?</h2>
<h3 id="heading-1-coroutine-scope-and-dispatchers">1. Coroutine Scope and Dispatchers</h3>
<pre><code class="lang-kotlin"><span class="hljs-comment">// Determine which thread pool to use via the Dispatcher</span>
CoroutineScope(Dispatchers.Default).launch {
    <span class="hljs-comment">// All coroutines in this scope use the Default thread pool</span>

    <span class="hljs-comment">// Switching between coroutines (e.g., during suspension) is extremely lightweight</span>
    <span class="hljs-keyword">val</span> result1 = async { heavyComputation() }
    <span class="hljs-keyword">val</span> result2 = async { anotherComputation() }
}
</code></pre>
<h3 id="heading-2-context-switching-cost-comparison">2. Context Switching Cost Comparison</h3>
<ul>
<li><p><strong>Thread Context Switch</strong>: Occurs at the OS level (Expensive).</p>
<ul>
<li>Requires saving/loading CPU register states: <strong>Program Counter (PC)</strong>, <strong>Stack Pointer (SP)</strong>, General-purpose registers, and Status registers.</li>
</ul>
</li>
</ul>
<blockquote>
<p><strong>Thread Switching Process:</strong></p>
<ol>
<li><p>Thread A is running -&gt; Save Thread A's register values to memory (in the <strong>PCB</strong>).</p>
</li>
<li><p>Load Thread B's register values from memory.</p>
</li>
<li><p>Start Thread B execution.</p>
</li>
</ol>
<p>What is a PCB (Process Control Block)?</p>
<p>A metadata structure maintained by the OS to manage each process.</p>
<ul>
<li><p><strong>Management Info</strong>: PID, Status, Priority, PC, CPU Registers, Scheduling info.</p>
</li>
<li><p><strong>Memory Info</strong>: Allocation, Page/Segment tables.</p>
</li>
<li><p><strong>File/IO Info</strong>: Open files, File descriptors, I/O devices.</p>
</li>
</ul>
</blockquote>
<ul>
<li><p><strong>Memory Map Switching</strong>: Switching the virtual memory address space.</p>
<ul>
<li><p><strong>TLB Flushing</strong>: Increases page table walks as the translation cache becomes invalid.</p>
</li>
<li><p><strong>Increased Cache Misses/Page Faults</strong>: Since the new process needs different data, memory access latency increases.</p>
</li>
</ul>
</li>
<li><p><strong>CPU Cache Invalidation</strong>: Data in the L1/L2/L3 caches may no longer be valid for the new thread, leading to cache misses.</p>
</li>
<li><p><strong>Coroutine Switching</strong>:</p>
<ul>
<li><p>Occurs within the same thread.</p>
</li>
<li><p>Only saves execution state and stack information.</p>
</li>
<li><p>Extremely lightweight with minimal memory access.</p>
</li>
</ul>
</li>
</ul>
<pre><code class="lang-kotlin"><span class="hljs-comment">// Conceptual structure of Coroutine Suspension (Continuation)</span>
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">example</span><span class="hljs-params">(continuation: <span class="hljs-type">ExampleContinuation</span>)</span></span> {
    <span class="hljs-keyword">when</span>(continuation.label) {
        <span class="hljs-number">0</span> -&gt; {
            continuation.x = <span class="hljs-number">10</span>
            continuation.label = <span class="hljs-number">1</span> 
            delay(<span class="hljs-number">1000</span>, continuation)
        }
        <span class="hljs-number">1</span> -&gt; {
            <span class="hljs-keyword">val</span> x = continuation.x <span class="hljs-comment">// Restore state</span>
            println(x)
            continuation.label = <span class="hljs-number">2</span>
            delay(<span class="hljs-number">2000</span>, continuation)
        }
        <span class="hljs-number">2</span> -&gt; {
            <span class="hljs-keyword">val</span> x = continuation.x
            println(x + <span class="hljs-number">1</span>)
        }
    }
}
</code></pre>
<hr />
<h2 id="heading-q4-if-i-set-the-spring-boot-max-thread-pool-to-200-are-async-and-coroutine-threads-created-separately">Q4. If I set the Spring Boot Max Thread Pool to 200, are Async and Coroutine threads created separately?</h2>
<p>Yes, they use <strong>independent thread pools</strong>:</p>
<ol>
<li><p><strong>Tomcat Thread Pool</strong>: <code>server.tomcat.threads.max=200</code> (For HTTP requests).</p>
</li>
<li><p><strong>Spring @Async Pool</strong>: Configured separately via <code>ThreadPoolTaskExecutor</code>.</p>
</li>
<li><p><strong>Coroutine Dispatcher Pool</strong>: <code>Default</code> and <code>IO</code> dispatchers maintain their own pools.</p>
</li>
</ol>
<p>Total Thread Count = Tomcat Threads + @Async Threads + Coroutine Threads.</p>
<p>In a Docker environment, you must be cautious as the sum of these pools can lead to heavy resource contention.</p>
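<p>The independence of these pools can be sketched in plain Java. This is only an illustration, not Spring's actual wiring: <code>named()</code> and the pool names are made up, with two <code>ExecutorService</code> instances standing in for the Tomcat and <code>@Async</code> pools.</p>
<pre><code class="lang-java">import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SeparatePoolsDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService httpPool = named("http");   // stands in for the Tomcat pool
        ExecutorService asyncPool = named("async"); // stands in for an @Async executor

        // Each task runs on a thread owned by its own pool, never the other's
        String httpThread = httpPool.submit(() -> Thread.currentThread().getName()).get();
        String asyncThread = asyncPool.submit(() -> Thread.currentThread().getName()).get();

        System.out.println(httpThread.startsWith("http"));   // true
        System.out.println(asyncThread.startsWith("async")); // true

        httpPool.shutdown();
        asyncPool.shutdown();
    }

    // Helper: build a pool whose threads carry a recognizable name prefix
    static ExecutorService named(String prefix) {
        return Executors.newFixedThreadPool(2, runnable -> {
            Thread thread = new Thread(runnable);
            thread.setName(prefix + "-worker");
            return thread;
        });
    }
}
</code></pre>
<p>Inspecting thread names like this (e.g., in a thread dump) is also a quick way to confirm which pool is actually executing a given piece of work.</p>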
<hr />
<h2 id="heading-q5-if-i-run-a-jar-in-docker-with-max-threads-at-200-2-async-threads-and-default-coroutine-settings-how-are-coroutine-threads-calculated">Q5. If I run a JAR in Docker with Max Threads at 200, 2 Async threads, and default Coroutine settings, how are Coroutine threads calculated?</h2>
<ul>
<li><p><strong>Tomcat</strong>: Max 200.</p>
</li>
<li><p><strong>Async</strong>: 2.</p>
</li>
<li><p><strong>Dispatchers.Default</strong>: sized to the number of CPU cores (minimum 2).</p>
</li>
<li><p><strong>Dispatchers.IO</strong>: capped at <code>max(64, core count)</code>, i.e., 64 threads on most machines.</p>
</li>
</ul>
<p>For an <strong>8-core system</strong>:</p>
<ul>
<li><p>$200 (Tomcat) + 2 (Async) + 8 (Default) + 64 (IO) = 274$ potential threads.</p>
<p>  Note: Threads are created/destroyed dynamically based on demand; they aren't all allocated at startup.</p>
</li>
</ul>
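<p>As a sanity check, the budget arithmetic can be written out in plain Java. The pool sizes below follow the defaults documented for kotlinx.coroutines (Default sizes to the core count with a minimum of 2; IO is capped at max(64, core count)); the 8-core figure is an assumption, not something the JVM is querying here.</p>
<pre><code class="lang-java">class ThreadBudgetDemo {
    public static void main(String[] args) {
        int cores = 8;    // assumed CPU count visible to the JVM
        int tomcat = 200; // server.tomcat.threads.max
        int async = 2;    // @Async executor pool size

        int dispatchersDefault = Math.max(2, cores); // Dispatchers.Default: core count, minimum 2
        int dispatchersIo = Math.max(64, cores);     // Dispatchers.IO: capped at max(64, cores)

        int total = tomcat + async + dispatchersDefault + dispatchersIo;
        System.out.println(total); // 274
    }
}
</code></pre>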
<hr />
<h2 id="heading-q6-what-happens-to-resources-if-i-spin-up-a-second-identical-container-on-the-same-host">Q6. What happens to resources if I spin up a second identical container on the same host?</h2>
<ul>
<li><p><strong>Container 1</strong>: Up to 282 threads.</p>
</li>
<li><p><strong>Container 2</strong>: Up to 282 threads.</p>
</li>
<li><p><strong>Total</strong>: Up to 564 threads.</p>
</li>
</ul>
<p><strong>Critical Point</strong>: Both containers share the same host CPU and memory. Without limits, they will compete for resources, leading to <strong>resource contention</strong> and performance degradation. It is best practice to define limits:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># docker-compose.yml example</span>
<span class="hljs-attr">services:</span>
  <span class="hljs-attr">app1:</span>
    <span class="hljs-attr">cpus:</span> <span class="hljs-string">'4'</span>
    <span class="hljs-attr">pids_limit:</span> <span class="hljs-number">150</span>
  <span class="hljs-attr">app2:</span>
    <span class="hljs-attr">cpus:</span> <span class="hljs-string">'4'</span>
    <span class="hljs-attr">pids_limit:</span> <span class="hljs-number">150</span>
</code></pre>
<hr />
<h2 id="heading-q7-maximize-performance-large-server-with-many-containers-vs-multiple-small-servers">Q7. Maximize Performance: Large Server with Many Containers vs. Multiple Small Servers?</h2>
<table><tbody><tr><td><p><strong>Strategy</strong></p></td><td><p><strong>Pros</strong></p></td></tr><tr><td><p><strong>Many Containers on One Server</strong></p></td><td><p>Efficient resource sharing, simpler management, lower cost, fast inter-container communication.</p></td></tr><tr><td><p><strong>Distributed over Small Servers</strong></p></td><td><p>Better fault isolation, easier individual scaling, less resource contention, hardware redundancy.</p></td></tr></tbody></table>

<p><strong>Verdict</strong>: It depends. Complementary services should be grouped, while high-load/mission-critical services should be isolated.</p>
<hr />
<h2 id="heading-q8-in-k8s-1-master-3-nodes-does-scaling-out-mean-adding-more-containers-to-the-same-server">Q8. In K8s (1 Master, 3 Nodes), does scaling out mean adding more containers to the same server?</h2>
<p>Scaling out in Kubernetes follows two paths:</p>
<ol>
<li><p><strong>Sufficient Node Resources</strong>: The K8s Scheduler places new Pods (containers) on existing nodes. Multiple containers will run on one server.</p>
</li>
<li><p><strong>Insufficient Node Resources</strong>: Pods enter a <strong>Pending</strong> state. You must add physical/virtual nodes. In cloud environments, <strong>Cluster Autoscaler</strong> can automate this.</p>
</li>
</ol>
<p><strong>Pro-tip</strong>: Always define <code>resources.requests</code> and <code>limits</code> to make scaling predictable.</p>
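<p>A minimal sketch of what that looks like in a container spec (the values here are illustrative, not recommendations):</p>
<pre><code class="lang-yaml"># Fragment of a Pod/Deployment container spec
resources:
  requests:
    cpu: "500m"      # share the scheduler reserves when placing the Pod
    memory: "512Mi"
  limits:
    cpu: "1"         # hard ceiling enforced at runtime
    memory: "1Gi"
</code></pre>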
]]></content:encoded></item><item><title><![CDATA[The Realization: Why I Needed TDD]]></title><description><![CDATA[My First Encounter with Test Code

When I first joined my previous company, I found that despite being a solution provider, there wasn't a properly established core solution. We had one product delivered to a client, but the codebase—likely due to a ...]]></description><link>https://www.codeslapping.com/the-realization-why-i-needed-tdd</link><guid isPermaLink="true">https://www.codeslapping.com/the-realization-why-i-needed-tdd</guid><category><![CDATA[TDD (Test-driven development)]]></category><dc:creator><![CDATA[GyungJae Ham]]></dc:creator><pubDate>Wed, 07 Jan 2026 01:24:46 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-my-first-encounter-with-test-code">My First Encounter with Test Code</h2>
<hr />
<p>When I first joined my previous company, I found that despite being a solution provider, there wasn't a properly established core solution. We had one product delivered to a client, but the codebase—likely due to a rushed timeline—lacked consistent conventions. I immediately felt that this project desperately needed verification for its tangled logic and various technical debts.</p>
<p>As we began building a new version of the solution, I proposed the introduction of test codes to my colleagues. Although they had little experience with automated testing, and I hadn't written extensive test code in a production environment myself, we all agreed on its necessity and decided to move forward.</p>
<h2 id="heading-feeling-the-necessity-of-tdd">Feeling the Necessity of TDD</h2>
<hr />
<p>The journey wasn't exactly smooth, but following the footsteps of many senior developers online, I gradually integrated test codes into the project. I added tests to existing business logic and established rules for different testing scopes. At the time, I thought I was writing well-organized, rule-compliant test code.</p>
<p>However, at some point, I felt that something was missing. Because I was writing tests <em>after</em> the business logic was already finished, the tests couldn't influence the design itself. In some cases, the tests ended up merely validating poorly designed or flawed logic, which made them only half as valuable as they should have been.</p>
<p>That was when <strong>TDD (Test-Driven Development)</strong> caught my eye. TDD seemed like the perfect approach to improve both design capabilities and requirement verification. Although it felt like a daunting concept at first, I began learning it through formal training. This post is a record of those trials and errors.</p>
<h2 id="heading-feedback-summary-before-the-3rd-session">Feedback Summary (Before the 3rd Session)</h2>
<hr />
<ol>
<li><p>Use <code>hasSize()</code> when validating the size of a Collection.</p>
</li>
<li><p>Parameters in <code>@ParameterizedTest</code> support automatic type conversion.</p>
</li>
<li><p>Replace Magic Numbers and Magic Strings with constants.</p>
</li>
<li><p>Do not abbreviate variable names.</p>
</li>
<li><p>Minimize the use of getters and setters.</p>
</li>
<li><p>Consider the specific nuance of variable names (e.g., <code>check</code> vs. <code>validate</code> vs. <code>verify</code>).</p>
</li>
<li><p>If something is difficult to test (like random values), push the responsibility to a higher-level object.</p>
</li>
<li><p>Instead of using a getter, send a message to the object (Tell, don't ask).</p>
</li>
<li><p>Keep indentation levels to a maximum of one.</p>
</li>
<li><p>Limit classes to a maximum of 2-3 instance variables.</p>
</li>
<li><p>Wrap primitive values in classes (Value Objects).</p>
</li>
<li><p>Utilize First-Class Collections.</p>
</li>
<li><p>Follow the "Object Calisthenics" principles.</p>
</li>
</ol>
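<p>Point 7 above can be sketched in plain Java: instead of calling a random generator inside the domain object, inject a source of numbers from above so a test can supply a fixed value. All the names here (<code>NumberSource</code>, <code>LottoNumber</code>) are hypothetical, made up for the illustration.</p>
<pre><code class="lang-java">class LottoDemo {
    public static void main(String[] args) {
        // In a test, push a deterministic source in from the caller
        LottoNumber fixed = new LottoNumber(() -> 7);
        System.out.println(fixed.matches(7)); // true
    }
}

// Hypothetical abstraction over "where numbers come from"
interface NumberSource {
    int next();
}

// Production implementation keeps the randomness at the edge
class RandomNumberSource implements NumberSource {
    private final java.util.Random random = new java.util.Random();
    public int next() {
        return random.nextInt(45) + 1;
    }
}

// The domain object no longer owns the randomness, so it is trivially testable
class LottoNumber {
    private final int value;

    LottoNumber(NumberSource source) {
        this.value = source.next();
    }

    // Tell, don't ask: answer a question instead of exposing the raw value
    boolean matches(int candidate) {
        return value == candidate;
    }
}
</code></pre>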
<h2 id="heading-object-calisthenics">Object Calisthenics</h2>
<hr />
<p>Rather than viewing these as rigid laws, I treat them as guidelines that increase the probability of producing high-quality code.</p>
<ol>
<li><p>Only one level of indentation per method.</p>
</li>
<li><p>Don't use the <code>else</code> keyword.</p>
</li>
<li><p>Wrap all primitives and strings.</p>
</li>
<li><p>Use only one dot per line (e.g., <code>object.method()</code>, avoiding deep chaining).</p>
</li>
<li><p>Don't abbreviate.</p>
</li>
<li><p>Keep all entities small.</p>
</li>
<li><p>No classes with more than two instance variables.</p>
</li>
<li><p>Use First-Class Collections.</p>
</li>
<li><p>No getters, setters, or properties (where possible).</p>
</li>
</ol>
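<p>As a rough illustration of rules 3 and 8, here is a minimal Java sketch; the class names are invented for the example, and a real version would likely expose behavior instead of <code>toInt()</code> to honor rule 9 as well.</p>
<pre><code class="lang-java">class ScoresDemo {
    public static void main(String[] args) {
        Scores scores = new Scores(new Score(90), new Score(85));
        System.out.println(scores.total()); // 175
    }
}

// Rule 3: wrap the primitive in a value object that enforces its own invariants
final class Score {
    private final int value;

    Score(int value) {
        if (value > 100 || 0 > value) {
            throw new IllegalArgumentException("score must be between 0 and 100");
        }
        this.value = value;
    }

    int toInt() {
        return value;
    }
}

// Rule 8: a first-class collection whose only instance variable is the collection itself
final class Scores {
    private final Score[] values;

    Scores(Score... values) {
        this.values = values.clone();
    }

    int total() {
        int sum = 0;
        for (Score score : values) {
            sum += score.toInt();
        }
        return sum;
    }
}
</code></pre>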
]]></content:encoded></item><item><title><![CDATA[Fixing Frozen Timestamps in Docker & Spring: A Lesson in Bean Lifecycle]]></title><description><![CDATA[The Issue

I discovered a bug where the dates in the user activity stream tab—which summarizes the last three days of activity—were stuck at the exact time the Docker container was started.
Troubleshooting

Attempt 1: Syncing Container Time with Loca...]]></description><link>https://www.codeslapping.com/fixing-frozen-timestamps-in-docker-and-spring-a-lesson-in-bean-lifecycle</link><guid isPermaLink="true">https://www.codeslapping.com/fixing-frozen-timestamps-in-docker-and-spring-a-lesson-in-bean-lifecycle</guid><category><![CDATA[Docker]]></category><category><![CDATA[Dockerfile]]></category><dc:creator><![CDATA[GyungJae Ham]]></dc:creator><pubDate>Wed, 07 Jan 2026 00:59:26 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-the-issue">The Issue</h2>
<hr />
<p>I discovered a bug where the dates in the user activity stream tab—which summarizes the last three days of activity—were stuck at the exact time the Docker container was started.</p>
<h2 id="heading-troubleshooting">Troubleshooting</h2>
<hr />
<h3 id="heading-attempt-1-syncing-container-time-with-local-server-time">Attempt 1: Syncing Container Time with Local Server Time</h3>
<p>Initially, I tried to synchronize the container's <code>localtime</code> with the host server's <code>localtime</code>. When the issue occurred, I ran the <code>date</code> command on the server, and it correctly displayed KST. However, the time inside the container was still being output in UTC. My Docker Compose configuration was set to mount <code>/etc/localtime:/etc/localtime</code>.</p>
<h3 id="heading-attempt-2-syncing-via-timezone-file">Attempt 2: Syncing via Timezone File</h3>
<p>After some research, I found feedback suggesting that syncing via <code>/etc/timezone</code> is often more reliable than using <code>/etc/localtime</code>. I applied this change immediately, but the container output remained in UTC.</p>
<h3 id="heading-attempt-3-installing-tzdata-in-the-dockerfile">Attempt 3: Installing <code>tzdata</code> in the Dockerfile</h3>
<p>To ensure the container could handle time dynamically, I modified the Dockerfile to install the <code>tzdata</code> package. I configured it to copy the <code>zoneinfo</code> of the desired timezone to the container's <code>/etc/localtime</code> and overwrite <code>/etc/timezone</code> with the specific TZ value.</p>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">ENV</span> TZ=Asia/Seoul
<span class="hljs-keyword">RUN</span><span class="bash"> apk add --no-cache tzdata &amp;&amp; \
    cp /usr/share/zoneinfo/<span class="hljs-variable">$TZ</span> /etc/localtime &amp;&amp; \
    <span class="hljs-built_in">echo</span> <span class="hljs-variable">$TZ</span> &gt; /etc/timezone</span>
</code></pre>
<p>After starting the container with this new image, the internal system time finally matched the server time.</p>
<h2 id="heading-re-evaluating-the-root-cause">Re-evaluating the Root Cause</h2>
<hr />
<p>Despite fixing the system time, the API was still returning a fixed timestamp. I kept wondering, "How can <code>LocalDateTime</code>, which is supposed to fetch the system time, be a constant value?"</p>
<p>The fact that even the nanoseconds were identical was the smoking gun. It suddenly dawned on me: when defining a <code>DateUtil</code> class for global use, I had declared the date variables as <code>static final</code>.</p>
<p>Feeling a bit foolish, I removed the <code>final</code> keywords and tried registering the class as a Spring <code>@Component</code> (Bean). But of course, the value was still fixed—the instance variable held the value initialized when the singleton bean was first created. It was a lapse in judgment during debugging. After some self-reflection, I refactored the code to either use a standard class or initialize the time at the exact point of execution.</p>
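<p>The trap can be reproduced in a few lines of plain Java (the class and field names here are illustrative, not my actual code): a <code>static final</code> field is evaluated exactly once at class initialization, while a method call evaluates at the moment of use.</p>
<pre><code class="lang-java">import java.time.LocalDateTime;

class FrozenClockDemo {
    public static void main(String[] args) throws InterruptedException {
        LocalDateTime first = DateUtil.FROZEN_NOW;
        Thread.sleep(50);

        // The static field never moves; the method call reflects real time
        System.out.println(DateUtil.FROZEN_NOW.equals(first)); // true
        System.out.println(DateUtil.now().isAfter(first));     // true
    }
}

class DateUtil {
    // BUG: evaluated once when the class is initialized, frozen thereafter
    static final LocalDateTime FROZEN_NOW = LocalDateTime.now();

    // FIX: evaluate at the exact point of use
    static LocalDateTime now() {
        return LocalDateTime.now();
    }
}
</code></pre>
<p>The same applies to an instance field on a singleton bean: it is initialized once when the bean is created and then shared for the application's lifetime.</p>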
<p>After running all test cases and deploying to the dev server, I confirmed the API works perfectly.</p>
<h2 id="heading-retrospective">Retrospective</h2>
<hr />
<p>I'm documenting this to remind myself to be more deliberate during development. Following the recent login issue, I really need to break the habit of reflexively registering everything as a Bean without considering its lifecycle.</p>
]]></content:encoded></item><item><title><![CDATA[From Monolithic to Strategic Decoupling: Ensuring 24/7 Availability in University LMS]]></title><description><![CDATA[This is the architecture of the solution we previously deployed to our clients. Most of our clients were universities (occasionally graduate schools, but the requirements were largely similar). These clients generally fell into three categories: thos...]]></description><link>https://www.codeslapping.com/from-monolithic-to-strategic-decoupling-ensuring-247-availability-in-university-lms</link><guid isPermaLink="true">https://www.codeslapping.com/from-monolithic-to-strategic-decoupling-ensuring-247-availability-in-university-lms</guid><category><![CDATA[architecture]]></category><category><![CDATA[refactoring]]></category><dc:creator><![CDATA[GyungJae Ham]]></dc:creator><pubDate>Wed, 07 Jan 2026 00:55:41 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767747218405/369b8974-4a63-4d86-9e20-58fc885d41ae.png" alt class="image--center mx-auto" /></p>
<p>This is the architecture of the solution we previously deployed to our clients. Most of our clients were universities (occasionally graduate schools, but the requirements were largely similar). These clients generally fell into three categories: those who outsourced server management to a third party, those with an in-house IT team managing the servers, and those with no dedicated IT team or physical server infrastructure at all.</p>
<p>Despite these three categories, they all shared a common constraint: an environment where <strong>scaling out servers at will was impossible</strong>, primarily due to budget limitations. Given these constraints, our initial delivery strategy was to request the highest-specification server the university could provide and deploy all services onto that single instance. Consequently, all business logic was concentrated within a single monolithic backend server.</p>
<p>Initially, this didn't pose significant issues. Since the solution was typically managed by the university's IT team or relevant departments after delivery, we didn't have much exposure to the operational challenges. However, the friction in this architecture began to surface when a client signed a larger contract that included a full year of operational support for their freshman class.</p>
<h3 id="heading-requirements">Requirements</h3>
<hr />
<p>As the project included a commitment to develop additional requirements, the university requested features like QR check-ins for mid-semester events (festivals, faculty consultations, etc.). The challenge arose from the nature of an LMS (Learning Management System) within a university setting: students can engage in coursework at any time.</p>
<p>Therefore, the server had to be constantly available to handle attendance data. We realized that shutting down the server to deploy new features could potentially compromise the <strong>data integrity and reliability</strong> of attendance records.</p>
<p>After some deliberation, we opted for a manual <strong>Blue/Green deployment</strong> strategy. We would update half of the running containers first, followed by the remaining half. While this method didn't cause any major incidents—aside from being incredibly tedious and nerve-wracking every time—it allowed us to navigate the semester successfully.</p>
<p>Once the semester concluded and the final grades were delivered, I began to reconsider the architecture for future deployments.</p>
<h3 id="heading-decoupling-mission-critical-functions">Decoupling Mission-Critical Functions</h3>
<hr />
<p>Based on the insights gained during operations, I decided to isolate the functions that required constant uptime. These primarily included the attendance module and the logic for aggregating assignment and quiz scores. However, since the assignment and quiz logic was subject to frequent changes, I decided not to group it with the attendance module.</p>
<p>Attendance data required <strong>zero-tolerance for error</strong>. Even if a discrepancy occurred, we needed a system that allowed for rapid feedback and correction via the database where all records were stored.</p>
<p>I further decoupled the servers related to attendance. Since the attendance server needed to be constantly available when requesting data, I designed separate servers to collect video viewing data and Zoom session information.</p>
<h3 id="heading-lessons-from-decoupling">Lessons from Decoupling</h3>
<hr />
<p>To be clear, decoupling isn't purely beneficial. It adds complexity to areas that were previously easier to develop within a single server. Our decision wasn't driven by a blind desire to follow the MSA (Microservices Architecture) hype; it was a pragmatic choice to reduce the operational burden for future projects. Since we rarely use more than two servers, managing an increasing number of containers could have led to overhead, especially since Kubernetes was overkill for our scale.</p>
<p>However, the conclusion so far is one of <strong>great satisfaction</strong>. While there are downsides—such as having to fetch data via external API communication—the overall service feels significantly more robust and stable. In the next post, I will review the performance and speed aspects of this change.</p>
]]></content:encoded></item><item><title><![CDATA[[Retrospective] The Dangers of Shared State: Why Global Variables in Singletons Are a Nightmare]]></title><description><![CDATA[Background
Hi, I’m a backend engineer currently working on building and maintaining LMS (Learning Management System) solutions for universities. Due to some "lucky" (?) timing with previous team members departing shortly after I joined, I found mysel...]]></description><link>https://www.codeslapping.com/7iod6rcb7zwy66m07iscioqwnouwno2vtoyvvcdtlzjripqg7j207jyg</link><guid isPermaLink="true">https://www.codeslapping.com/7iod6rcb7zwy66m07iscioqwnouwno2vtoyvvcdtlzjripqg7j207jyg</guid><category><![CDATA[Singleton Design Pattern]]></category><dc:creator><![CDATA[GyungJae Ham]]></dc:creator><pubDate>Wed, 07 Jan 2026 00:21:43 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-background">Background</h2>
<p>Hi, I’m a backend engineer currently working on building and maintaining LMS (Learning Management System) solutions for universities. Due to some "lucky" (?) timing with previous team members departing shortly after I joined, I found myself leading the current projects. This post is a self-reflection—and a bit of a self-reprimand—to ensure I never repeat the same mistakes again.</p>
<p>To give you some context, our team’s tech stack includes:</p>
<ul>
<li><p><strong>Language/Framework:</strong> Java 17, Spring Boot 3.x</p>
</li>
<li><p><strong>Persistence/Query:</strong> JPA, QueryDSL, MySQL, PostgreSQL</p>
</li>
<li><p><strong>Infrastructure/Middleware:</strong> Redis, Docker, Nginx, Ubuntu 20.04/22.04</p>
</li>
</ul>
<hr />
<h2 id="heading-the-incident-a-collision-of-state">The Incident: A Collision of State</h2>
<p>The issue arose during the process of passing classroom information to <strong>Canvas LMS</strong> via <strong>SAML authentication</strong>. Specifically, the problem occurred when trying to include metadata about the specific classroom a user intended to access within the SAML payload.</p>
<p><strong>The Initial (Flawed) Approach:</strong> My initial thought was: <em>"When a user requests the SAML login URL, let's capture the target classroom info, store it in an object, and then use that data to authenticate and redirect the user."</em></p>
<p>However, I hit a roadblock. There was no straightforward way to retrieve that stored information when Canvas sent the callback request back to our server. Since Canvas didn't even provide identifying information about which user was making the request at that specific stage, I couldn't simply persist it in the database and query it later.</p>
<p>Under pressure, I made a decision I now deeply regret—a decision born from not yet having "felt" the dangers of stateful Singletons in my bones: <strong>I decided to store these user-specific details in a global variable (static/class-level field) within a Singleton bean.</strong></p>
<h2 id="heading-the-symptom-it-works-on-my-machine">The Symptom: It Works on My Machine</h2>
<p>Predictably, the logic worked perfectly during local development. Since I was the only one testing, there were no concurrent requests to expose the horror of global variables. This was also a failure of our QA process; we didn't adequately simulate high-concurrency scenarios.</p>
<p>The nightmare finally manifested during a <strong>live client demonstration</strong>. As multiple stakeholders logged in simultaneously, they began seeing other people's names and accessing classrooms belonging to different users. It was a catastrophic "identity swap" scenario that I’d rather forget.</p>
<hr />
<h2 id="heading-the-fix-from-dirty-patches-to-proper-state-management">The Fix: From Dirty Patches to Proper State Management</h2>
<p><strong>Attempt 1 (The Naive Fix):</strong> In a panic, my first thought was to keep the global variable but immediately nullify/initialize it after the redirection. This didn't solve the problem; it only reduced the error frequency. If another request hit the server in the millisecond before the variable was cleared, the same collision occurred. Concurrency is not something you can "race" against with manual overrides.</p>
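<p>The collision itself can be reproduced deterministically, even without real concurrency, by interleaving two "requests" against one shared static field. All names below are hypothetical; the point is only that one field cannot hold two users' state.</p>
<pre><code class="lang-java">class SharedStateDemo {
    public static void main(String[] args) {
        // Request A stores its target classroom before redirecting to the IdP
        SamlStateHolder.classroomId = "math-101";

        // Request B arrives a moment later and overwrites the shared field
        SamlStateHolder.classroomId = "bio-202";

        // Request A's callback now reads B's classroom: the identity swap
        System.out.println(SamlStateHolder.classroomId); // bio-202
    }
}

// A stateful singleton like the one described: one field shared by every request
class SamlStateHolder {
    static String classroomId;
}
</code></pre>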
<p><strong>Attempt 2 (The Robust Solution):</strong> The final and correct approach was to leverage <strong>Client-side State</strong>. I chose to store the encrypted classroom metadata in a <strong>Cookie</strong>. When the flow redirected to the point where Canvas needed the data, the server could retrieve it directly from the user's browser request. This effectively decoupled the state from the server's memory and tied it to the individual user's session. This completely resolved the identity-swapping issue.</p>
<hr />
<h2 id="heading-lessons-learned">Lessons Learned</h2>
<p>They say experience is a hard teacher because she gives the test first and the lesson afterward. While this was a "good" experience in the sense that I will now be obsessively cautious about state management and Singleton design, I realize that with just a bit more deep thinking, I could have avoided such a fundamental architectural flaw.</p>
<p>If you’ve experienced something similar, you have my deepest sympathies. If you haven't—let this be a warning: <strong>Be extremely vigilant when handling state within Singleton objects.</strong> In a multi-threaded environment, global variables are a ticking time bomb.</p>
]]></content:encoded></item></channel></rss>