<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Sagar Budhathoki]]></title><description><![CDATA[A Python/DevOps Engineer with hands-on experience in cloud computing, automating and optimizing mission-critical deployments in AWS, leveraging configuration ma]]></description><link>https://blog.budhathokisagar.com.np</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1745767987301/d163a745-cd99-4edf-8f16-a4dbe4ae096f.png</url><title>Sagar Budhathoki</title><link>https://blog.budhathokisagar.com.np</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 18:30:34 GMT</lastBuildDate><atom:link href="https://blog.budhathokisagar.com.np/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Ubuntu Random Freezes Fix: Limit Power PL1]]></title><description><![CDATA[My Ubuntu server kept freezing. No warning. No error. Just completely stuck. Screen frozen, keyboard dead, only option was to hold the power button and force restart.
This happened randomly. Sometimes]]></description><link>https://blog.budhathokisagar.com.np/ubuntu-random-freezes-fix-limit-power-pl1</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/ubuntu-random-freezes-fix-limit-power-pl1</guid><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Fri, 13 Mar 2026 06:28:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/61c71decf6c63f538e6f4749/bc8641d3-55e0-455d-90e5-4d2e8a2be3d9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My Ubuntu server kept freezing. No warning. No error. Just completely stuck. Screen frozen, keyboard dead, only option was to hold the power button and force restart.</p>
<p>This happened randomly. Sometimes after a few hours, sometimes after a few days. I couldn't find a pattern. I checked RAM, disk, GPU, drivers, Docker, everything. All looked fine.</p>
<p>Turns out the problem was my CPU eating too much power. And the fix was literally one command.</p>
<hr />
<h2>Some Background</h2>
<p>My server runs on an Intel i9-14900K with a Gigabyte motherboard and an NVIDIA RTX 4090. It handles PostgreSQL, Docker containers, Python scripts, and some web scraping workers. Nothing crazy. But every now and then, when multiple things run at the same time, the whole system just freezes.</p>
<p>No logs. No crash dump. No kernel panic. Just dead.</p>
<hr />
<h2>What Are PL1 and PL2?</h2>
<p>Every Intel CPU has two power limits built in. Think of them as rules that tell the CPU how much electricity it's allowed to use.</p>
<p><strong>PL1 (Power Limit 1)</strong> is the long-term limit. This is how much power the CPU can use continuously. All day, every day. Intel says for the i9-14900K, this should be <strong>125 watts</strong>.</p>
<p><strong>PL2 (Power Limit 2)</strong> is the short-term boost limit. When you suddenly need more power (compiling code, running heavy queries), the CPU is allowed to jump up to PL2 for a few seconds. Intel says this should be <strong>253 watts</strong>. After that short burst, it must return to PL1.</p>
<p>So the normal behavior should look like this:</p>
<pre><code class="language-plaintext">Normal load:    30W ... 50W ... 40W ... 60W
Sudden spike:   30W ... 50W ... 253W! ... 253W! ... back to 125W ... 60W ... 40W
                                ^^^^^^^^^^^^^^^^^^^^
                                PL2 boost (few seconds only)
</code></pre>
<p>The CPU goes fast when needed, then calms down. That's healthy.</p>
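<p>You can watch this behavior in real time by sampling the RAPL energy counter the kernel exposes next to the power limits. A minimal sketch (the counter counts microjoules; reading it may require sudo on some setups):</p>
<pre><code class="language-bash"># Sample the package energy counter twice, one second apart
# (energy_uj counts microjoules consumed by the CPU package)
E1=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)
sleep 1
E2=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)

# Microjoules per second = microwatts, so divide by 1,000,000 for watts
echo "Package power: $(( (E2 - E1) / 1000000 ))W"
</code></pre>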
<hr />
<h2>What Was Wrong on My Server?</h2>
<p>I checked my power limits with these commands:</p>
<pre><code class="language-bash">PL1=$(cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw)
echo "PL1: $((PL1 / 1000000))W"

PL2=$(cat /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw)
echo "PL2: $((PL2 / 1000000))W"
</code></pre>
<p>Result:</p>
<pre><code class="language-plaintext">PL1: 253W
PL2: 253W
</code></pre>
<p>Both set to 253W. That means there is no "calm down" phase. The CPU is allowed to pull 253 watts forever if the workload demands it. No speed limit. Full power all the time.</p>
<p>This is a known issue with Gigabyte motherboards (and ASUS, MSI too). They ship with power limits set to maximum by default. Why? Because higher power means higher benchmark scores. Looks great in reviews. Not so great for stability.</p>
<hr />
<h2>Why This Causes Freezes</h2>
<p>Imagine a water pipe rated for 125 liters per minute. Someone removed the limit and now 253 liters can flow through it. Most of the time you only use 10-20 liters, so nothing happens. But when many taps open at once, 253 liters rush through. The pipe can't handle the pressure and bursts.</p>
<p>Same thing with the CPU. At idle, my server uses about 2-3 watts. Normal workload is maybe 30-60 watts. No problem. But when PostgreSQL runs a heavy query while Docker containers are scraping websites and Python scripts are processing data, all at once, the CPU pulls 200+ watts sustained. The chip gets extremely hot, the motherboard power delivery gets stressed, and the whole system locks up.</p>
<p>Intel actually acknowledged this problem for 13th and 14th gen CPUs. They released microcode updates and told everyone to set PL1 back to 125W. Many people were having crashes and didn't know why.</p>
<hr />
<h2>The Fix</h2>
<p><strong>Step 1: Check your current power limits</strong></p>
<pre><code class="language-bash">PL1=$(cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw)
echo "PL1: $((PL1 / 1000000))W"

PL2=$(cat /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw)
echo "PL2: $((PL2 / 1000000))W"
</code></pre>
<p>If PL1 shows anything above 125W, that's your problem.</p>
<p><strong>Step 2: Fix it right now (instant, no reboot)</strong></p>
<pre><code class="language-bash">echo 125000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
</code></pre>
<p>That's it. One command. PL1 is now 125W. Your CPU will still boost to 253W for short bursts (PL2), but it won't stay there. It will always come back down to 125W.</p>
<p><strong>Step 3: Verify it worked</strong></p>
<pre><code class="language-bash">PL1=$(cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw)
echo "PL1: $((PL1 / 1000000))W"
</code></pre>
<p>Should now show 125W.</p>
<p><strong>Step 4: Make it permanent</strong></p>
<p>The command above resets on reboot. You have two options to make it stick.</p>
<p><strong>Option A: Fix in BIOS (recommended)</strong></p>
<p>Reboot, enter BIOS (usually Del or F2 on Gigabyte boards), and look for one of these:</p>
<ul>
<li><p>"Package Power Limit 1" or "PL1"</p>
</li>
<li><p>"Long Duration Power Limit"</p>
</li>
<li><p>Usually under "Tweaker" or "Advanced CPU Settings"</p>
</li>
</ul>
<p>Set it to 125. Save and exit.</p>
<p><strong>Option B: Set it automatically on every boot</strong></p>
<p>Create a systemd service:</p>
<pre><code class="language-bash">sudo bash -c 'cat &gt; /etc/systemd/system/cpu-power-limit.service &lt;&lt; EOF
[Unit]
Description=Set CPU PL1 Power Limit to 125W
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/bin/bash -c "echo 125000000 &gt; /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

[Install]
WantedBy=multi-user.target
EOF'

sudo systemctl enable cpu-power-limit.service
</code></pre>
<p>Now every time the server boots, PL1 gets set to 125W automatically.</p>
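<p>To confirm the service does its job without waiting for a reboot, run it once by hand and re-read the limit:</p>
<pre><code class="language-bash">sudo systemctl start cpu-power-limit.service

# Should print 125000000 (microwatts, i.e. 125W)
cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
</code></pre>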
<p><strong>Step 5: Keep Intel microcode updated</strong></p>
<p>Intel released microcode patches specifically for this issue. Make sure it's installed and won't get auto-removed:</p>
<pre><code class="language-bash">sudo apt install intel-microcode
sudo apt-mark manual intel-microcode
</code></pre>
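<p>After the next reboot, you can verify which microcode revision actually loaded:</p>
<pre><code class="language-bash"># Microcode revision the CPU is currently running
grep -m1 microcode /proc/cpuinfo

# Early-boot microcode load messages
sudo dmesg | grep -i microcode | head
</code></pre>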
<hr />
<h2>Will This Make My Server Slower?</h2>
<p>No. My server uses 2-3 watts at idle and maybe 30-60 watts during normal work. The 125W limit is more than double what I normally use. The only thing that changes is the CPU can't sustain 253W for minutes at a time anymore. For a server running databases and containers, you will never notice the difference.</p>
<hr />
<h2>Quick Summary</h2>
<p>Server kept freezing randomly. No errors, no logs, just dead. Turned out the motherboard had CPU power limits set way too high (253W sustained instead of Intel's recommended 125W). Under heavy load, the CPU would pull too much power for too long and the system would lock up.</p>
<p>Fixed it with one command:</p>
<pre><code class="language-bash">echo 125000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
</code></pre>
<p>Then made it permanent in BIOS.</p>
<p>If you have a 13th or 14th gen Intel CPU and your system freezes randomly, check your PL1. There's a good chance your motherboard set it way too high.</p>
<p>Thanks!</p>
<p>Happy Debugging!!😀</p>
<hr />
<p><em>Moral of the story: sometimes the fix isn't in the software. Sometimes your motherboard is just being too generous with the power, and your CPU doesn't know when to stop.</em></p>
]]></content:encoded></item><item><title><![CDATA[Kernel Panic, No Display, and NVIDIA Drama - A Regular Day in DevOps]]></title><description><![CDATA[Let me tell you about this week (obviously, mine).
I pressed the power button on our office’s Ubuntu server. Fans started spinning. CPU light turned on. Everything sounded normal. But the monitor? Completely black. Nothing. Not even the BIOS screen showed u...]]></description><link>https://blog.budhathokisagar.com.np/ubuntu-kernel-panic-no-display-issue</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/ubuntu-kernel-panic-no-display-issue</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[Kernel]]></category><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Wed, 18 Feb 2026 10:59:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771412298097/bc86213e-177e-43c7-8e36-52269ae78015.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let me tell you about this week (obviously, mine).</p>
<p>I pressed the power button on our office’s Ubuntu server. Fans started spinning. CPU light turned on. Everything sounded normal. But the monitor? Completely black. Nothing. Not even the BIOS screen showed up.</p>
<p>Great. Just great.</p>
<hr />
<h2 id="heading-first-problem-no-display-at-all">First Problem: No Display At All</h2>
<p>Okay, so picture this. The machine is ON. I can hear it running. I can even see the CPU running (due to the awesome-looking LEDs😁). But the screen shows absolutely nothing. Not a single pixel.</p>
<p>I got scared. I thought the motherboard was dead. It's a Gigabyte board, and well... Gigabyte can be dramatic sometimes.</p>
<p>I tried everything. Different monitor, different cables, different ports. Nothing worked.</p>
<p>Then I thought, let me just leave it alone for a bit. Sometimes electronics just need a break. Like me.</p>
<p>So here's what I did:</p>
<ol>
<li><p>Turned it off completely</p>
</li>
<li><p>Unplugged the power cable from the wall</p>
</li>
<li><p>Held the power button for 20 seconds (this drains leftover charge from the board)</p>
</li>
<li><p>Went to the washroom for a nature call</p>
</li>
<li><p>Came back after 30 minutes</p>
</li>
</ol>
<p>Plugged it back in. Pressed power.</p>
<p>Display came on. Just like that.</p>
<p>I know it sounds like magic but it's actually a real thing. Motherboards hold residual charge in their capacitors. Sometimes that charge gets stuck in a weird state and the board refuses to POST. Draining it fully resets everything.</p>
<p>Okay cool. Hardware is alive. But...</p>
<hr />
<h2 id="heading-second-problem-kernel-panic">Second Problem: Kernel Panic</h2>
<p>Instead of my normal login screen, I got this:</p>
<pre><code class="lang-bash">KERNEL PANIC!
Please reboot your computer.
VFS: Unable to mount root fs on unknown-block(0,0)
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771409692141/d730da11-aa73-4355-b54c-11b260d64287.jpeg" alt class="image--center mx-auto" /></p>
<p>For those who haven't seen this before, this is basically the Linux kernel saying "I woke up and I have no idea where the hard drive is. I give up."</p>
<p>That <code>unknown-block(0,0)</code> part means the kernel can't find ANY disk. Not because the disk is broken. But because it doesn't have the right drivers loaded to see the disk. The initramfs (a tiny filesystem that loads first and contains these drivers) was either missing or broken.</p>
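<p>Side note: from a kernel that does boot, checking and rebuilding an initramfs is straightforward. A quick sketch (substitute your own broken kernel version for mine):</p>
<pre><code class="lang-bash"># Is there an initrd for every installed kernel?
ls -lh /boot/initrd.img-*

# Rebuild the initramfs for one specific kernel version
sudo update-initramfs -c -k 6.17.0-14-generic
</code></pre>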
<hr />
<h2 id="heading-the-save-booting-an-older-kernel">The Save: Booting an Older Kernel</h2>
<p>Here's something beautiful about Ubuntu. It keeps your old kernels in the boot menu. So even when the latest one breaks, you can go back.</p>
<p>I rebooted, held <code>Shift</code> to open the GRUB menu, picked <strong>Advanced options</strong>, and selected the previous kernel.</p>
<p>It booted perfectly. Everything was fine.</p>
<p>So the disk was fine. Ubuntu was fine. It was just the new kernel that was broken.</p>
<p>Now I needed to find out why.</p>
<hr />
<h2 id="heading-the-investigation">The Investigation</h2>
<p>First thing I checked. What kernels are installed?</p>
<pre><code class="lang-bash">dpkg --list | grep linux-image
</code></pre>
<p>And I spotted it immediately:</p>
<pre><code class="lang-bash">ii  linux-image-6.14.0-37-generic    ...   (working kernel)
iF  linux-image-6.17.0-14-generic    ...   (broken kernel)
</code></pre>
<p>See that <code>iF</code>? The <code>i</code> is the desired state: install. The <code>F</code> is the actual state: <strong>Failed to configure</strong>. The kernel package was downloaded and placed on the system, but something went wrong during setup. The initramfs was never built properly. That's why the kernel couldn't find the disk.</p>
<p>But what caused the failure?</p>
<hr />
<h2 id="heading-the-root-cause-nvidia-said-no">The Root Cause: NVIDIA Said No</h2>
<p>I ran this to try and fix things:</p>
<pre><code class="lang-bash">sudo apt --fix-broken install
</code></pre>
<p>And the error told me everything:</p>
<pre><code class="lang-bash">Error! Bad <span class="hljs-built_in">return</span> status <span class="hljs-keyword">for</span> module build on kernel: 6.17.0-14-generic
dkms autoinstall on 6.17.0-14-generic/x86_64 failed <span class="hljs-keyword">for</span> nvidia(10)
</code></pre>
<p>There it is. My NVIDIA driver (version 575) could not compile against kernel 6.17. When Ubuntu installs a new kernel, it uses DKMS to rebuild all third-party drivers (like NVIDIA) for that kernel. NVIDIA's build failed. That failure broke the entire kernel installation chain. And that left me with a kernel that was half-installed and couldn't boot.</p>
<p>This is actually super common. Ubuntu's HWE (Hardware Enablement) kernels move fast. NVIDIA drivers don't always keep up. The new kernel lands, NVIDIA can't build against it, and boom, broken boot.</p>
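<p>You can see the same story from DKMS's side with a single command. The output below is illustrative; yours will differ:</p>
<pre><code class="lang-bash">dkms status

# nvidia/575.64, 6.14.0-37-generic, x86_64: installed
# nvidia/575.64, 6.17.0-14-generic, x86_64: broken
</code></pre>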
<hr />
<h2 id="heading-the-fix">The Fix</h2>
<p>Since this is a server, I don't need the bleeding edge kernel. Stability matters more. So the plan was simple. Remove the broken 6.17 kernel and stick with 6.14 which works perfectly.</p>
<p>But there was a small headache. Dependency chain. Package A depends on Package B which depends on Package C. You can't just remove one. You have to go in order.</p>
<p>The trick is to remove the meta-packages first, then the actual kernel:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># This one goes first. Breaks the dependency chain</span>
sudo dpkg --force-remove-reinstreq --purge linux-generic-hwe-24.04

<span class="hljs-comment"># Now these will work</span>
sudo dpkg --force-remove-reinstreq --purge linux-headers-generic-hwe-24.04
sudo dpkg --force-remove-reinstreq --purge linux-image-generic-hwe-24.04
sudo dpkg --force-remove-reinstreq --purge linux-headers-6.17.0-14-generic
sudo dpkg --force-remove-reinstreq --purge linux-image-6.17.0-14-generic
</code></pre>
<p>Then clean up:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Fix any leftover broken state</span>
sudo apt --fix-broken install

<span class="hljs-comment"># Remove old kernel configs that are just sitting around</span>
dpkg --list | grep <span class="hljs-string">"^rc"</span> | awk <span class="hljs-string">'{print $2}'</span> | xargs sudo dpkg --purge

<span class="hljs-comment"># Update the boot menu</span>
sudo update-grub
</code></pre>
<p>And one more important step. Stop Ubuntu from pulling kernel 6.17 again on the next update:</p>
<pre><code class="lang-bash">sudo apt-mark hold linux-image-generic-hwe-24.04 linux-headers-generic-hwe-24.04 linux-generic-hwe-24.04
</code></pre>
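<p>Quick sanity check that the hold actually stuck:</p>
<pre><code class="lang-bash">apt-mark showhold

# linux-generic-hwe-24.04
# linux-headers-generic-hwe-24.04
# linux-image-generic-hwe-24.04
</code></pre>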
<p>Reboot. Clean boot. Everything works. Server is back.</p>
<p>Later, when NVIDIA releases a driver that supports kernel 6.17, I can unhold and upgrade:</p>
<pre><code class="lang-bash">sudo apt-mark unhold linux-image-generic-hwe-24.04 linux-headers-generic-hwe-24.04 linux-generic-hwe-24.04
sudo apt upgrade
</code></pre>
<hr />
<h2 id="heading-things-i-want-you-to-remember">Things I Want You to Remember</h2>
<p><strong>Always keep at least two kernels.</strong> Ubuntu does this by default. Don't mess with it. That old kernel is your backup plan when things go wrong.</p>
<p><strong>Learn the GRUB menu.</strong> Hold <code>Shift</code> during boot. It lets you pick which kernel to boot. This one trick has saved me more times than I can count.</p>
<p><strong>NVIDIA and new kernels don't always get along.</strong> On a server, ask yourself. Do I really need the latest HWE kernel? If not, stick with what works.</p>
<p><strong>Know your dpkg flags.</strong></p>
<pre><code class="lang-bash">ii  = Installed and configured (all good)
iF  = Installed but FAILED to configure (this is your problem)
rc  = Removed but config files still there (needs cleanup)
</code></pre>
<p><strong><code>unknown-block(0,0)</code> is not a dead disk.</strong> It means the kernel doesn't have the drivers to see your disk. The initramfs is broken. Boot an older kernel and fix it from there.</p>
<p><strong>The power drain trick works.</strong> No display at all? Unplug everything, hold the power button for 20 seconds, wait a while. It's not broscience. Capacitors really do hold charge that can cause weird behavior.</p>
<hr />
<h2 id="heading-quick-summary">Quick Summary</h2>
<p>Server wouldn't start. No display, fixed by draining power and waiting. Then a kernel panic, caused by the NVIDIA driver failing to build against kernel 6.17. Fixed by booting an older kernel from GRUB, removing the broken kernel packages, and holding HWE updates until NVIDIA catches up.</p>
<p>Day well spent? Debatable. But hey, at least the server is running and I got a blog post out of it.</p>
<p>Tyastai haina ra? (Nepali for "Isn't that so?"😁) You always learn the most when things break.</p>
<hr />
<p><em>If you hit something similar, hope this saves you some hours. And no, reinstalling Ubuntu is never the answer (I’ve done that a lot). I don't do that here. I debug.</em></p>
]]></content:encoded></item><item><title><![CDATA[SSH Port Knocking: Knock Knock. Who's There? Not the Bots.]]></title><description><![CDATA[I was casually checking one of my server logs one evening (around 9:30 PM😁) and saw something annoying. Hundreds of failed SSH login attempts. From Nepal, China, Russia, Brazil, everywhere. Bots were hammering my SSH door (port 22), as if it owed them m...]]></description><link>https://blog.budhathokisagar.com.np/ssh-port-knocking-knock-knock-whos-there-not-the-bots</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/ssh-port-knocking-knock-knock-whos-there-not-the-bots</guid><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Mon, 19 Jan 2026 12:42:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768826388160/a8de1510-ca28-4376-a6b9-37a95d14672b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was casually checking one of my server logs one evening (around 9:30 PM😁) and saw something annoying. Hundreds of failed SSH login attempts. From Nepal, China, Russia, Brazil, everywhere. Bots were hammering my SSH door (port 22), as if it owed them money.</p>
<pre><code class="lang-bash">$ tail -f /var/<span class="hljs-built_in">log</span>/auth.log
Jan 18 10:23:45 server sshd: Failed password <span class="hljs-keyword">for</span> invalid user admin from 185.234.x.x
Jan 18 10:23:47 server sshd: Failed password <span class="hljs-keyword">for</span> root from 103.45.x.x
Jan 18 10:23:48 server sshd: Failed password <span class="hljs-keyword">for</span> invalid user <span class="hljs-built_in">test</span> from 45.227.x.x
</code></pre>
<p>Now, I use key-based authentication, so these bots weren't getting in. But still, watching strangers knock on your door all day is not fun. So I went down a rabbit hole and found this old-school technique called <a target="_blank" href="https://help.ubuntu.com/community/PortKnocking">port knocking</a>.</p>
<p>Honestly, it felt like something from a spy movie. And it really works.</p>
<hr />
<h2 id="heading-so-what-is-port-knocking">So What is Port Knocking?</h2>
<p>Imagine your server is like a hidden bar during prohibition. The door looks just like a regular wall with no signs or handles. But if you knock in a secret pattern - three knocks, pause, two knocks - the door suddenly appears and lets you in.</p>
<p>That's port knocking.</p>
<p>Your SSH port stays closed. Completely invisible to anyone scanning. But when you knock on a secret sequence of ports in the right order, the firewall opens SSH just for your IP.</p>
<pre><code class="lang-plaintext">1. You knock on port 7777
2. You knock on port 8888  
3. You knock on port 9999
4. Server goes "ah, it's you!" and opens SSH
5. You connect normally
6. When done, knock the reverse sequence to close it again
</code></pre>
<p>A little daemon called <code>knockd</code> listens for these patterns. When it sees the right sequence from your IP, it runs an iptables command to let you through. Everyone else just sees a closed port.</p>
<hr />
<h2 id="heading-lets-set-it-up">Let's Set It Up</h2>
<p>I'm on Ubuntu, but this works on most Linux distributions with iptables. You'll need sudo access and ideally a cup of tea/coffee/lemon-tea (not any other beverage😁) because we're doing this step by step.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before we dive in, make sure you have:</p>
<ul>
<li><p>A Linux server (Ubuntu/Debian preferred, but any distro with iptables works)</p>
</li>
<li><p>Sudo or root access</p>
</li>
<li><p>SSH access to your server (obviously)</p>
</li>
<li><p>A second terminal open and ready (trust me on this one)</p>
</li>
<li><p>Console access from your cloud provider as backup (AWS, DigitalOcean, GCP, Vultr - they all have it)</p>
</li>
</ul>
<blockquote>
<p>That last point is important. If something goes wrong and you lock yourself out, console access is your "break glass in case of emergency" option. Don't skip setting this up.</p>
</blockquote>
<p>Alright, let's do this.</p>
<h3 id="heading-step-1-move-ssh-to-a-different-port">Step 1: Move SSH to a Different Port</h3>
<p>First things first. Let's get SSH off port 22. Every bot and their grandfather scans port 22. So we're moving it to a non-standard port.</p>
<pre><code class="lang-bash">sudo vim /etc/ssh/sshd_config
</code></pre>
<p>Find the Port line and change it:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Change this</span>
<span class="hljs-comment">#Port 22</span>

<span class="hljs-comment"># To this</span>
Port 13022
</code></pre>
<p>Why <strong>13022</strong>? The 13th is my birthday (unlucky for attackers), and 022 reminds me it's SSH. Pick whatever you like, just avoid obvious ones like 2222, 22222, etc.</p>
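<p>One small tip before restarting: validate the config first. A typo in <code>sshd_config</code> can stop SSH from coming back up at all:</p>
<pre><code class="lang-bash"># Test the config; prints errors if something is wrong
sudo sshd -t
echo $?   # 0 means the config parsed cleanly
</code></pre>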
<p><strong>Restart SSH:</strong></p>
<pre><code class="lang-bash">sudo systemctl restart sshd
</code></pre>
<p>Now here's the important part. Don't close your current session. Open a new terminal and test:</p>
<pre><code class="lang-bash">ssh -p 13022 user@your-server-ip
</code></pre>
<p>If you can connect, great. If not, you still have your old session to fix things. Don't skip this step, or you might lock yourself out. Ask me how I know😞.</p>
<h3 id="heading-step-2-install-knockd">Step 2: Install knockd</h3>
<pre><code class="lang-bash">sudo apt update
sudo apt install knockd -y
</code></pre>
<h3 id="heading-step-3-configure-the-knock-sequence">Step 3: Configure the Knock Sequence</h3>
<p>Open the config:</p>
<pre><code class="lang-bash">sudo vim /etc/knockd.conf
</code></pre>
<p>Replace everything with:</p>
<pre><code class="lang-ini"><span class="hljs-section">[options]</span>
    UseSyslog
    <span class="hljs-attr">LogFile</span> = /var/log/knockd.log

<span class="hljs-section">[openSSH]</span>
    <span class="hljs-attr">sequence</span>    = <span class="hljs-number">7777</span>,<span class="hljs-number">8888</span>,<span class="hljs-number">9999</span>
    <span class="hljs-attr">seq_timeout</span> = <span class="hljs-number">10</span>
    <span class="hljs-attr">command</span>     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport <span class="hljs-number">13022</span> -j ACCEPT
    <span class="hljs-attr">tcpflags</span>    = syn

<span class="hljs-section">[closeSSH]</span>
    <span class="hljs-attr">sequence</span>    = <span class="hljs-number">9999</span>,<span class="hljs-number">8888</span>,<span class="hljs-number">7777</span>
    <span class="hljs-attr">seq_timeout</span> = <span class="hljs-number">10</span>
    <span class="hljs-attr">command</span>     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport <span class="hljs-number">13022</span> -j ACCEPT
    <span class="hljs-attr">tcpflags</span>    = syn
</code></pre>
<p>Quick breakdown:</p>
<ul>
<li><p><code>sequence</code> - your secret knock pattern</p>
</li>
<li><p><code>seq_timeout</code> - you have 10 seconds to complete the knock</p>
</li>
<li><p><code>command</code> - the iptables magic that runs when you knock correctly</p>
</li>
<li><p><code>%IP%</code> - automatically replaced with your IP</p>
</li>
<li><p><code>tcpflags = syn</code> - only listens for connection attempts</p>
</li>
</ul>
<p>I'm using 7777, 8888, 9999 for this tutorial. For your actual server, pick something less obvious. Maybe your lucky numbers or a birthday. Just don't use 1234, 5678 - you get the idea.</p>
<h3 id="heading-step-4-tell-knockd-to-actually-start">Step 4: Tell knockd to Actually Start</h3>
<p>By default, knockd is installed but disabled. Let's fix that:</p>
<pre><code class="lang-bash">sudo vim /etc/default/knockd
</code></pre>
<p>Change these:</p>
<pre><code class="lang-bash">START_KNOCKD=1
KNOCKD_OPTS=<span class="hljs-string">"-i eth0"</span>
</code></pre>
<p>For the interface name, check yours:</p>
<pre><code class="lang-bash">ip addr show
</code></pre>
<p>Look for the one with your public IP. Usually <code>eth0</code> or <code>ens3</code> or something similar.</p>
<p>Fire it up:</p>
<pre><code class="lang-bash">sudo systemctl start knockd
sudo systemctl <span class="hljs-built_in">enable</span> knockd
</code></pre>
<h3 id="heading-step-5-lock-the-door">Step 5: Lock the Door</h3>
<p>Now the fun part. We block SSH by default, so only the knock can open it.</p>
<p>If you're using UFW, disable it first:</p>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">disable</span>
</code></pre>
<p>We're using raw iptables here. You can also do this with UFW commands instead, as explained in this <a target="_blank" href="https://help.ubuntu.com/community/PortKnocking">link</a>.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># This keeps your current session alive - very important!</span>
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

<span class="hljs-comment"># Allow localhost</span>
sudo iptables -A INPUT -i lo -j ACCEPT

<span class="hljs-comment"># Block SSH by default</span>
sudo iptables -A INPUT -p tcp --dport 13022 -j REJECT

<span class="hljs-comment"># Save the rules so they survive reboot</span>
sudo apt install iptables-persistent -y
sudo netfilter-persistent save
</code></pre>
<blockquote>
<p>That first rule is crucial. It tells iptables "don't kill existing connections." Without it, running the block rule would kick you out immediately. Not fun. SAD LIFE!</p>
</blockquote>
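<p>Before relying on it, sanity check that the rules landed in the right order (output trimmed and illustrative):</p>
<pre><code class="lang-bash">sudo iptables -L INPUT -n --line-numbers

# 1  ACCEPT  ... ctstate RELATED,ESTABLISHED
# 2  ACCEPT  ... (loopback)
# 3  REJECT  tcp dpt:13022
</code></pre>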
<p><strong>Do not close your current SSH session yet.</strong> Keep it open as a safety net while you test.</p>
<hr />
<h2 id="heading-testing-time">Testing Time</h2>
<p>On your local machine, install the knock client:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Ubuntu/Debian</span>
sudo apt install knockd

<span class="hljs-comment"># macOS</span>
brew install knock
</code></pre>
<p>First, let's confirm the port is actually blocked:</p>
<pre><code class="lang-bash">ssh -p 13022 user@your-server-ip
</code></pre>
<p>You should get "Connection refused." Good. The door is locked.</p>
<p>Now perform the secret knock:</p>
<pre><code class="lang-bash">knock -v your-server-ip 7777 8888 9999
</code></pre>
<p>You'll see:</p>
<pre><code class="lang-plaintext">hitting tcp your-server-ip:7777
hitting tcp your-server-ip:8888
hitting tcp your-server-ip:9999
</code></pre>
<p>Quickly connect:</p>
<pre><code class="lang-bash">ssh -p 13022 user@your-server-ip
</code></pre>
<p>And you're in.</p>
<p>When you're done, close the door behind you:</p>
<pre><code class="lang-bash">knock -v your-server-ip 9999 8888 7777
</code></pre>
<p>Notice the reverse order. 9999, 8888, 7777. That's your "close" sequence.</p>
<h3 id="heading-no-knock-client-telnet-works-too">No knock client? Telnet works too</h3>
<pre><code class="lang-bash">telnet your-server-ip 7777
<span class="hljs-comment"># Ctrl+C</span>
telnet your-server-ip 8888
<span class="hljs-comment"># Ctrl+C</span>
telnet your-server-ip 9999
<span class="hljs-comment"># Ctrl+C</span>
</code></pre>
<p>You'll see "Connection refused" each time, but that's fine. The knock packets still get sent.</p>
<hr />
<h2 id="heading-making-life-easier-with-a-script">Making Life Easier with a Script</h2>
<p>Typing the knock command every time gets old fast. Here's a simple script:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-comment"># knock-ssh.sh</span>

SERVER=<span class="hljs-string">"your-server-ip"</span>
SSH_PORT=<span class="hljs-string">"13022"</span>
USER=<span class="hljs-string">"your-username"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Performing secret knock..."</span>
knock <span class="hljs-variable">$SERVER</span> 7777 8888 9999

sleep 1

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Opening the door..."</span>
ssh -p <span class="hljs-variable">$SSH_PORT</span> <span class="hljs-variable">$USER</span>@<span class="hljs-variable">$SERVER</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Closing the door..."</span>
knock <span class="hljs-variable">$SERVER</span> 9999 8888 7777
</code></pre>
<p>Save it, make it executable:</p>
<pre><code class="lang-bash">chmod +x knock-ssh.sh
./knock-ssh.sh
</code></pre>
<p>One command to rule them all.</p>
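<p>If you'd rather just type <code>ssh myserver</code>, you can bake the knock into <code>~/.ssh/config</code> with a ProxyCommand. A sketch, assuming <code>nc</code> (netcat) is installed locally and <code>myserver</code> is whatever alias you pick:</p>
<pre><code class="lang-ini">Host myserver
    HostName your-server-ip
    Port 13022
    User your-username
    ProxyCommand sh -c "knock %h 7777 8888 9999; sleep 1; nc %h %p"
</code></pre>
<p>The <code>%h</code> and <code>%p</code> tokens expand to the host and port, so the knock fires automatically on every connection. You'll still want to send the closing knock yourself when you're done.</p>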
<hr />
<h2 id="heading-watching-the-magic-happen">Watching the Magic Happen</h2>
<p>Want to see <code>knockd</code> in action? Check the logs:</p>
<pre><code class="lang-bash">sudo tail -f /var/<span class="hljs-built_in">log</span>/knockd.log
</code></pre>
<p>When you knock correctly:</p>
<pre><code class="lang-plaintext">[2025-01-18 10:45:23] 203.0.113.50: openSSH: Stage 1
[2025-01-18 10:45:23] 203.0.113.50: openSSH: Stage 2
[2025-01-18 10:45:24] 203.0.113.50: openSSH: Stage 3
[2025-01-18 10:45:24] 203.0.113.50: openSSH: OPEN SESAME
[2025-01-18 10:45:24] openSSH: running command: /sbin/iptables -I INPUT -s 203.0.113.50 -p tcp --dport 13022 -j ACCEPT
</code></pre>
<p>"OPEN SESAME" - I didn't make that up, knockd actually says that. Whoever wrote this daemon had a sense of humor like mine!</p>
<hr />
<h2 id="heading-a-few-things-to-keep-in-mind">A Few Things to Keep in Mind</h2>
<p>Port knocking is clever, but it's not magic. Some limitations:</p>
<ul>
<li><p>If someone is watching your network traffic, they can see the knock sequence.</p>
</li>
<li><p>Doesn't play well with CI/CD or automated deployments</p>
</li>
<li><p>If <code>knockd</code> crashes, you might lock yourself out😔</p>
</li>
</ul>
<p>This is perfect for personal servers, homelabs, and dev boxes. For production with a team, you probably want a VPN or something like Teleport.</p>
<p>Some tips from experience:</p>
<ul>
<li><p>Always have console access as backup. Cloud providers have web-based consoles. Use them if you lock yourself out.</p>
</li>
<li><p>Don't use sequential ports like 1000, 2000, 3000. Too easy to guess.</p>
</li>
<li><p>Keep your knock sequence to yourself.</p>
</li>
<li><p>Still use key-based SSH authentication. Port knocking is an extra layer, not a replacement.</p>
</li>
</ul>
<hr />
<h2 id="heading-quick-reference">Quick Reference</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># Install</span>
sudo apt install knockd

<span class="hljs-comment"># Config file</span>
/etc/knockd.conf

<span class="hljs-comment"># Enable on boot</span>
sudo vim /etc/default/knockd  <span class="hljs-comment"># Set START_KNOCKD=1</span>

<span class="hljs-comment"># Start the service</span>
sudo systemctl <span class="hljs-built_in">enable</span> --now knockd

<span class="hljs-comment"># Knock from client</span>
knock -v server-ip port1 port2 port3

<span class="hljs-comment"># Check logs</span>
sudo tail -f /var/<span class="hljs-built_in">log</span>/knockd.log
</code></pre>
<hr />
<p>That's it. Your SSH server is now basically invisible. Port scanners see nothing. Bots find nothing to attack. Only you with the secret knock can get in.</p>
<p>My auth.log has been so much quieter since I set this up. No more watching random IPs fail to guess "admin" as a username fifty times a minute.</p>
<p>Hope this saves you some headaches, too.</p>
<p>#devops #security #linux #ssh #networking</p>
<p><strong>Thanks! JADAU!</strong></p>
<h3 id="heading-but-wait-theres-more-bonus">But Wait... There's More! (Bonus)</h3>
<p>Port knocking works at the firewall level. You can use it for any port:</p>
<ul>
<li><p>PostgreSQL (5432)</p>
</li>
<li><p>MySQL (3306)</p>
</li>
<li><p>Redis (6379)</p>
</li>
<li><p>Admin panels</p>
</li>
<li><p>Internal APIs</p>
</li>
<li><p>Literally anything</p>
</li>
</ul>
<p>You just add more blocks in <code>/etc/knockd.conf</code>:</p>
<pre><code class="lang-ini"><span class="hljs-section">[openPostgres]</span>
    <span class="hljs-attr">sequence</span>    = <span class="hljs-number">5555</span>,<span class="hljs-number">6666</span>,<span class="hljs-number">7777</span>
    <span class="hljs-attr">seq_timeout</span> = <span class="hljs-number">10</span>
    <span class="hljs-attr">command</span>     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport <span class="hljs-number">5432</span> -j ACCEPT
    <span class="hljs-attr">tcpflags</span>    = syn

<span class="hljs-section">[closePostgres]</span>
    <span class="hljs-attr">sequence</span>    = <span class="hljs-number">7777</span>,<span class="hljs-number">6666</span>,<span class="hljs-number">5555</span>
    <span class="hljs-attr">seq_timeout</span> = <span class="hljs-number">10</span>
    <span class="hljs-attr">command</span>     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport <span class="hljs-number">5432</span> -j ACCEPT
    <span class="hljs-attr">tcpflags</span>    = syn
</code></pre>
<p>Different knock sequences for each service.</p>
]]></content:encoded></item><item><title><![CDATA[Solve Mysterious MacOS Disk Usage: List and Delete APFS Snapshots]]></title><description><![CDATA[I woke up to a full disk panic: Finder said I had almost no space. A quick dive into the terminal revealed something surprising… hundreds of gigabytes tucked away inside /System.
If you encounter mysterious missing space after updates, APFS snapshots...]]></description><link>https://blog.budhathokisagar.com.np/macos-cleanup-unused-storage</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/macos-cleanup-unused-storage</guid><category><![CDATA[APFS snapshots]]></category><category><![CDATA[Disk Cleanup]]></category><category><![CDATA[macOS]]></category><category><![CDATA[developer tips]]></category><category><![CDATA[System administration]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Thu, 28 Aug 2025 12:01:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756382623419/9fa6a2b6-9d95-465d-94d2-9fbcd6384219.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I woke up to a full disk panic: Finder said I had almost no space. A quick dive into the terminal revealed something surprising… hundreds of gigabytes tucked away inside /System.</p>
<p>If you encounter mysterious missing space after updates, APFS snapshots are a common suspect. Here’s a clear, safe walkthrough I used to find the culprit and get my machine back to normal.</p>
<h2 id="heading-why-does-this-happen-short">Why does this happen? (short)</h2>
<p>APFS supports snapshots - point-in-time copies of the filesystem. macOS creates them during system updates (and Time Machine uses them too). Normally, they are cleaned up automatically, but sometimes old snapshots remain and quietly consume space.</p>
<h2 id="heading-quick-diagnosis-where-is-the-disk-space-going">Quick diagnosis: Where is the disk space going?</h2>
<p>Run this to see top-level disk usage (sudo may be required):</p>
<pre><code class="lang-bash">sudo du -hxd1 / | sort -hr | head -20
</code></pre>
<p>In my case, the output showed:</p>
<pre><code class="lang-bash">218G    /System
</code></pre>
<p>That was the red flag. Drill into /System:</p>
<pre><code class="lang-bash">sudo du -hxd1 /System | sort -hr | head -20
</code></pre>
<p>I found <code>/System/Volumes</code> consuming 140+ GB.</p>
<h2 id="heading-finding-apfs-snapshots">Finding APFS snapshots</h2>
<p>List snapshots on the root APFS volume with:</p>
<pre><code class="lang-bash">sudo diskutil apfs listSnapshots /
</code></pre>
<p>This prints snapshots like <code>com.apple.os.update-YYYY-...</code> along with UUIDs. Those <code>com.apple.os.update-*</code> entries were left behind after macOS updates in my case.</p>
<p>Example output snippet:</p>
<pre><code class="lang-bash">Snapshots <span class="hljs-keyword">for</span> APFS Volume with UUID: XXXXX-XXXX-...
Snapshot name: com.apple.os.update-2023-07-12-123456
Snapshot UUID: 01234567-89AB-CDEF-0123-456789ABCDEF
... (more snapshots)
</code></pre>
<h2 id="heading-deleting-snapshots-safe-approach">Deleting snapshots (safe approach)</h2>
<p>To remove a single snapshot:</p>
<pre><code class="lang-bash">sudo diskutil apfs deleteSnapshot / -uuid &lt;SNAPSHOT-UUID&gt;
</code></pre>
<p>Replace <code>&lt;SNAPSHOT-UUID&gt;</code> with the UUID from the list. Repeat for unwanted snapshots. Deleting update snapshots is safe if the system is fully booted into the new OS and you don't plan to roll back.</p>
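<p>If you'd rather not copy UUIDs around, recent macOS versions of <code>diskutil</code> also accept a snapshot name (the name below is just an example; use one from your own listing):</p>
<pre><code class="lang-bash">sudo diskutil apfs deleteSnapshot / -name com.apple.os.update-2023-07-12-123456
</code></pre>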
<p>If snapshots refuse to go away or reference in-use files, try one of the following:</p>
<ul>
<li><p>Reboot into Safe Mode and try again.</p>
</li>
<li><p>Boot to macOS Recovery and use Terminal from there to delete stubborn snapshots.</p>
</li>
</ul>
<p>Note: on APFS, snapshots are a filesystem feature… deleting them frees the space they were holding, but if you're unsure, booting into Recovery is the safest route.</p>
<h2 id="heading-optional-sweep-caches-and-developer-junk">Optional: sweep caches and developer junk</h2>
<p>After removing large snapshots, you can reclaim even more space by cleaning developer caches and unused container data. Use these commands carefully — they remove caches and unused Docker images/containers.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Homebrew cleanup</span>
brew cleanup

<span class="hljs-comment"># Docker: remove all unused containers, networks, images (dangling and unreferenced)</span>
docker system prune -af

<span class="hljs-comment"># Remove user cache (be careful — this deletes files in ~/.cache)</span>
rm -rf ~/.cache/*

<span class="hljs-comment"># npm cache cleanup</span>
npm cache clean --force
</code></pre>
<p>I ran these and got another chunk of space back.</p>
<h2 id="heading-mini-caseexample-what-i-saw-real-numbers">Mini case/example — what I saw (real numbers)</h2>
<ul>
<li><p>Free space before: ~5 GB</p>
</li>
<li><p>Found: /System ~218 GB, /System/Volumes ~140+ GB</p>
</li>
<li><p>Action: listed APFS snapshots, removed several <code>com.apple.os.update-*</code> snapshots, then ran cache and Docker cleanup</p>
</li>
<li><p>Free space after: ~90+ GB</p>
</li>
</ul>
<p>This saved me from a full macOS reinstall.</p>
<h2 id="heading-sharp-edges-amp-gotchas">Sharp edges &amp; gotchas</h2>
<ul>
<li><p>APFS snapshots are powerful: deleting an update snapshot removes the ability to roll back to that update. Only delete snapshots when you're confident you won't need to revert.</p>
</li>
<li><p>If macOS or Time Machine is actively using snapshots, the system may prevent deletion until you boot to Recovery or Safe Mode.</p>
</li>
<li><p>Always double-check UUIDs before deleting. A typo can remove the wrong snapshot.</p>
</li>
<li><p><code>rm -rf ~/.cache/*</code> is destructive: back up anything you care about before mass-deleting caches.</p>
</li>
<li><p>On FileVault-encrypted systems, be extra careful with disk utilities. If in doubt, boot into Recovery and work from there.</p>
</li>
</ul>
<h2 id="heading-extra-tips">Extra tips</h2>
<ul>
<li><p>Regular maintenance: run <code>brew cleanup</code> occasionally and prune Docker images you no longer need.</p>
</li>
<li><p>Keep an eye on snapshots after system updates for a few days: sometimes automatic cleanup lags.</p>
</li>
<li><p>If you use Time Machine, local snapshots can also consume space - <code>tmutil listlocalsnapshots /</code> shows them (see the snippet below).</p>
</li>
</ul>
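<p>For those local Time Machine snapshots, <code>tmutil</code> has a matching delete command (the date stamp below is illustrative; use one from your own listing):</p>
<pre><code class="lang-bash"># List local snapshots on the root volume
tmutil listlocalsnapshots /

# Delete one by its date stamp
sudo tmutil deletelocalsnapshots 2023-07-12-123456
</code></pre>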
<h2 id="heading-key-takeaways">Key Takeaways</h2>
<ul>
<li><p>APFS snapshots can silently hog hundreds of GB — check them when disk usage looks wrong.</p>
</li>
<li><p>Diagnose with <code>sudo du -hxd1 /</code> and <code>sudo diskutil apfs listSnapshots /</code>.</p>
</li>
<li><p>Delete snapshots with <code>sudo diskutil apfs deleteSnapshot / -uuid &lt;UUID&gt;</code>; use Recovery Mode if needed.</p>
</li>
<li><p>Clean developer caches and Docker artifacts (<code>brew cleanup</code>, <code>docker system prune -af</code>, <code>npm cache clean --force</code>) for extra space.</p>
</li>
<li><p>Always double-check what you delete and keep backups for critical data.</p>
</li>
</ul>
<p>If you're in Kathmandu or anywhere in the world and see a surprising /System volume, this process should help you breathe easy again… saved me from reinstalling macOS. Jadau! 🎉</p>
]]></content:encoded></item><item><title><![CDATA[Updated: This is a test article using MCP server for hashnode]]></title><description><![CDATA[This is a test article created using MCP (Model Context Protocol) server integration with Hashnode.
What is MCP?
MCP allows AI assistants like Claude to interact directly with platforms such as Hashnode, making it possible to create, edit...]]></description><link>https://blog.budhathokisagar.com.np/updated-this-is-a-test-article-using-mcp-server-for-hashnode</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/updated-this-is-a-test-article-using-mcp-server-for-hashnode</guid><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Fri, 02 May 2025 10:38:23 GMT</pubDate><content:encoded><![CDATA[<p>This is a test article created using MCP (Model Context Protocol) server integration with Hashnode.</p>
<h2 id="heading-what-is-mcp">What is MCP?</h2>
<p>MCP allows AI assistants like Claude to interact directly with platforms such as Hashnode, making it possible to create, edit, and publish content through conversation.</p>
<h2 id="heading-testing-features">Testing Features</h2>
<p>This article demonstrates the integration capabilities between an AI assistant and Hashnode's publishing platform. Through this integration, users can:</p>
<ul>
<li>Create new articles</li>
<li>Add appropriate tags</li>
<li>Format content with markdown</li>
<li>Publish or save as drafts</li>
</ul>
<h2 id="heading-next-steps">Next Steps</h2>
<p>If this test is successful, we can explore more advanced content creation workflows using this integration.</p>
<p>Thank you for testing this functionality!</p>
]]></content:encoded></item><item><title><![CDATA[I Built an MCP Server for Hashnode]]></title><description><![CDATA[Introduction
In the rapidly evolving landscape of AI tools and integrations, the ability to extend AI capabilities through custom interfaces has become increasingly valuable. Today, I'm excited to share a project I've been working on: the Hashnode MC...]]></description><link>https://blog.budhathokisagar.com.np/mcp-server-for-hashnode</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/mcp-server-for-hashnode</guid><category><![CDATA[AI]]></category><category><![CDATA[mcp]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Thu, 01 May 2025 20:20:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746160370771/f2a6f097-173c-4ab8-8f87-74711175eddc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In the rapidly evolving landscape of AI tools and integrations, the ability to extend AI capabilities through custom interfaces has become increasingly valuable. Today, I'm excited to share a project I've been working on: the <strong>Hashnode MCP Server</strong>. This tool bridges the gap between AI assistants and the Hashnode blogging platform, enabling seamless content creation, management, and retrieval directly through AI interactions.</p>
<p>In this article, I'll walk you through what the Model Context Protocol (MCP) is, how my Hashnode MCP server works, and how you can set it up to enhance your own content workflow.</p>
<h2 id="heading-demo-first"><strong>Demo First? (😁)</strong></h2>
<ul>
<li><p><strong><em>Create Article:</em></strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746192652653/c0c6b524-1cf7-4bcb-8bab-d31cdf555a90.gif" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Update Article</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746192608690/13289718-a80c-4547-950e-6fdcc28ac55c.gif" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h2 id="heading-what-is-the-model-context-protocol-mcp">What is the Model Context Protocol (MCP)?</h2>
<p>The Model Context Protocol (MCP) is a framework that allows AI models to interact with external tools and data sources. It provides a standardized way for AI assistants to access additional capabilities beyond their built-in functions.</p>
<p>MCP servers act as intermediaries between AI models and external services, exposing a set of tools and resources that the AI can use to perform specific tasks. This extends what AI assistants can do without requiring them to have direct API access to every possible service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746159518471/62cd27c2-7e03-4096-9c71-acf0906ab66d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-introducing-the-hashnode-mcp-server">Introducing the Hashnode MCP Server</h2>
<p>The Hashnode MCP Server is a Python-based implementation that connects AI assistants to the Hashnode API. It allows AI models to perform various operations on Hashnode blogs, including:</p>
<ul>
<li><p>Creating and publishing new articles</p>
</li>
<li><p>Updating existing articles</p>
</li>
<li><p>Searching for articles by keywords</p>
</li>
<li><p>Retrieving article details</p>
</li>
<li><p>Getting user information</p>
</li>
<li><p>Fetching the latest articles from a publication</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746159560516/4cac2819-9d5a-4c05-905e-ffe03198154b.png" alt class="image--center mx-auto" /></p>
<p>This means that with the Hashnode MCP Server, you can ask an AI assistant to draft a blog post, publish it to your Hashnode blog, search for related content, or update existing articles—all without leaving your conversation with the AI.</p>
<h2 id="heading-how-the-hashnode-mcp-server-works">How the Hashnode MCP Server Works</h2>
<p>At its core, the Hashnode MCP Server is a bridge between AI assistants and the Hashnode GraphQL API. Here's how it works:</p>
<ol>
<li><p><strong>Connection</strong>: The MCP server establishes a connection with both the AI assistant and the Hashnode API.</p>
</li>
<li><p><strong>Tool Exposure</strong>: It exposes a set of tools that represent different Hashnode operations.</p>
</li>
<li><p><strong>Request Handling</strong>: When the AI assistant wants to perform an action, it sends a request to the MCP server.</p>
</li>
<li><p><strong>API Interaction</strong>: The server translates this request into the appropriate GraphQL query or mutation for the Hashnode API.</p>
</li>
<li><p><strong>Response Formatting</strong>: After receiving a response from Hashnode, the server formats it in a way that's easy for the AI to understand and present to the user.</p>
</li>
</ol>
<p>The server is built using the FastMCP framework, which simplifies the process of creating MCP servers by handling the communication protocol details.</p>
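<p>Under the hood, every tool call boils down to an HTTP request against Hashnode's GraphQL endpoint. Here's a rough hand-rolled equivalent of the simplest case (a <code>me</code> query; the server's real queries and mutations are more involved):</p>
<pre><code class="lang-bash"># Assumes HASHNODE_PERSONAL_ACCESS_TOKEN is exported in your shell
curl -s https://gql.hashnode.com \
  -H "Content-Type: application/json" \
  -H "Authorization: $HASHNODE_PERSONAL_ACCESS_TOKEN" \
  -d '{"query": "query { me { username } }"}'
</code></pre>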
<h2 id="heading-setting-up-the-hashnode-mcp-server">Setting Up the Hashnode MCP Server</h2>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p>Python 3.8 or higher</p>
</li>
<li><p>A Hashnode account with a personal access token</p>
</li>
<li><p>Basic familiarity with command-line operations</p>
</li>
</ul>
<h3 id="heading-installation-steps">Installation Steps</h3>
<ol>
<li><p><strong>Clone the repository</strong>:</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/sbmagar13/hashnode-mcp-server.git
 <span class="hljs-built_in">cd</span> hashnode-mcp-server
</code></pre>
</li>
<li><p><strong>Create a virtual environment</strong>:</p>
<pre><code class="lang-bash"> python -m venv .venv
 <span class="hljs-built_in">source</span> .venv/bin/activate  <span class="hljs-comment"># On Windows: .venv\Scripts\activate</span>
</code></pre>
</li>
<li><p><strong>Install dependencies</strong>:</p>
<pre><code class="lang-bash"> pip install -r requirements.txt
</code></pre>
</li>
<li><p><strong>Set up environment variables</strong>: Create a <code>.env</code> file in the project root with the following content:</p>
<pre><code class="lang-bash"> HASHNODE_API_URL=https://gql.hashnode.com
 HASHNODE_PERSONAL_ACCESS_TOKEN=your_personal_access_token
</code></pre>
<p> Replace <code>your_personal_access_token</code> with your actual Hashnode personal access token, which you can generate in your Hashnode account settings.</p>
</li>
<li><p><strong>Run the server</strong>:</p>
<p> You have two options for running the server:</p>
<p> <strong>Option 1</strong>: Run the server manually</p>
<pre><code class="lang-bash"> python run_server.py
</code></pre>
<p> Or directly using the root file:</p>
<pre><code class="lang-bash"> python mcp_server.py
</code></pre>
<p> The server will start and listen for connections from AI assistants. By default, it runs on <a target="_blank" href="http://localhost:8000"><code>localhost:8000</code></a>.</p>
<p> <strong>Option 2</strong>: Let the MCP integration handle it automatically (<strong><em>I’ll be using this</em></strong>)</p>
<p> When properly configured in Claude Desktop or Cline VSCode extension, the MCP integration will automatically start and manage the server process for you.</p>
</li>
</ol>
<h3 id="heading-important-note-on-file-structure"><strong>Important</strong> Note on File Structure</h3>
<p>When configuring your MCP server in Claude Desktop or Cline VSCode extension, you should use the root <code>mcp_server.py</code> file directly rather than the files in the <code>hashnode_mcp</code> directory. The <code>hashnode_mcp</code> directory is primarily for packaging purposes.</p>
<p>For example, in your configuration, point to:</p>
<pre><code class="lang-bash">/path/to/your/hashnode-mcp-server/mcp_server.py
</code></pre>
<p>And not:</p>
<pre><code class="lang-bash">/path/to/your/hashnode-mcp-server/hashnode_mcp/mcp_server.py
</code></pre>
<p>This ensures you're using the most up-to-date version of the server with all features enabled. The root <code>mcp_server.py</code> file contains all the necessary functionality and doesn't require the package structure to operate correctly.</p>
<h2 id="heading-using-the-hashnode-mcp-server-with-ai-assistants">Using the Hashnode MCP Server with AI Assistants</h2>
<p>Once your server is configured, you can connect compatible AI assistants to it. Unlike traditional API integrations that use URLs, MCP servers are typically configured directly in the AI assistant's configuration files, as we'll see in the next section.</p>
<p>The connection process generally involves:</p>
<ol>
<li><p>Setting up the configuration file for your AI assistant (Claude Desktop or Cline VSCode extension)</p>
</li>
<li><p>Specifying the path to your Python interpreter and the MCP server script</p>
</li>
<li><p>Providing necessary environment variables like your Hashnode personal access token</p>
</li>
</ol>
<p>After configuring the connection, you can start giving the AI commands related to your Hashnode blog. For example:</p>
<ul>
<li><p>"Create a new article about Python programming tips"</p>
</li>
<li><p>"Update my article with ID 12345 to fix the code examples"</p>
</li>
<li><p>"Get the latest articles from my blog"</p>
</li>
<li><p>"Search for articles about machine learning"</p>
</li>
</ul>
<p>The AI will use the MCP server to execute these commands and return the results.</p>
<h2 id="heading-configuring-mcp-on-claude-desktop-and-cline-vscode-extension">Configuring MCP on Claude Desktop and Cline VSCode Extension</h2>
<p>To use your Hashnode MCP Server with Claude AI, you'll need to configure it in either Claude Desktop or the Cline VSCode extension. Here's how to set it up in both environments:</p>
<h3 id="heading-configuring-mcp-on-cline-vscode-extension">Configuring MCP on Cline VSCode Extension</h3>
<ol>
<li><p><strong>Open VS Code</strong> with the Cline extension installed.</p>
</li>
<li><p><strong>Navigate to the Cline MCP settings file</strong> located at:</p>
<ul>
<li><p>Windows: <code>%APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json</code></p>
</li>
<li><p>macOS: <code>~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json</code></p>
</li>
<li><p>Linux: <code>~/.config/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json</code></p>
</li>
</ul>
</li>
<li><p><strong>Add your Hashnode MCP server configuration</strong> to the file:</p>
<pre><code class="lang-json"> {
   <span class="hljs-attr">"mcpServers"</span>: {
     <span class="hljs-attr">"hashnode"</span>: {
       <span class="hljs-attr">"command"</span>: <span class="hljs-string">"/path/to/your/venv/bin/python"</span>,
       <span class="hljs-attr">"args"</span>: [
         <span class="hljs-string">"/path/to/your/hashnode-mcp-server/mcp_server.py"</span>  <span class="hljs-comment">// Use the root mcp_server.py file</span>
       ],
       <span class="hljs-attr">"env"</span>: {
         <span class="hljs-attr">"HASHNODE_PERSONAL_ACCESS_TOKEN"</span>: <span class="hljs-string">"your-personal-access-token"</span>
       }
     }
   }
 }
</code></pre>
<p> Note that the configuration points to the root <code>mcp_server.py</code> file, not the one in the <code>hashnode_mcp</code> directory.</p>
</li>
<li><p><strong>Replace the paths and token</strong> with your actual values. For example:</p>
<pre><code class="lang-json"> {
   <span class="hljs-attr">"mcpServers"</span>: {
     <span class="hljs-attr">"hashnode"</span>: {
       <span class="hljs-attr">"command"</span>: <span class="hljs-string">"/Users/sagar/my_personal/hashnode-mcp-server/.venv/bin/python"</span>,
       <span class="hljs-attr">"args"</span>: [
         <span class="hljs-string">"/Users/sagar/my_personal/hashnode-mcp-server/mcp_server.py"</span>
       ],
       <span class="hljs-attr">"env"</span>: {
         <span class="hljs-attr">"HASHNODE_PERSONAL_ACCESS_TOKEN"</span>: <span class="hljs-string">"your-personal-access-token"</span>
       }
     }
   }
 }
</code></pre>
</li>
<li><p><strong>Save the file</strong> and restart VS Code or reload the window.</p>
</li>
<li><p><strong>Open a new Cline conversation</strong> and test the connection by asking it to interact with your Hashnode blog.</p>
</li>
</ol>
<h3 id="heading-configuring-mcp-on-claude-desktop">Configuring MCP on Claude Desktop</h3>
<ol>
<li><p><strong>Open Claude Desktop</strong> and navigate to the configuration file:</p>
<ul>
<li><p>Windows: <code>%APPDATA%\Claude\claude_desktop_config.json</code></p>
</li>
<li><p>macOS: <code>~/Library/Application Support/Claude/claude_desktop_config.json</code></p>
</li>
<li><p>Linux: Claude Desktop is not available for Linux at the time of writing, so use the Cline VSCode extension instead (see above).</p>
</li>
</ul>
</li>
<li><p><strong>Add your Hashnode MCP server configuration</strong> to the file, using the same format as for the Cline VSCode extension. Make sure to point to the root <code>mcp_server.py</code> file:</p>
<pre><code class="lang-json"> {
   <span class="hljs-attr">"mcpServers"</span>: {
     <span class="hljs-attr">"hashnode"</span>: {
       <span class="hljs-attr">"command"</span>: <span class="hljs-string">"/path/to/your/venv/bin/python"</span>,
       <span class="hljs-attr">"args"</span>: [
         <span class="hljs-string">"/path/to/your/hashnode-mcp-server/mcp_server.py"</span>  <span class="hljs-comment">// Use the root mcp_server.py file</span>
       ],
       <span class="hljs-attr">"env"</span>: {
         <span class="hljs-attr">"HASHNODE_PERSONAL_ACCESS_TOKEN"</span>: <span class="hljs-string">"your-personal-access-token"</span>
       }
     }
   }
 }
</code></pre>
</li>
<li><p><strong>Save the file</strong> and restart Claude Desktop.</p>
</li>
<li><p><strong>Test the connection</strong> by asking Claude to perform a simple operation like "Get the latest articles from my Hashnode blog."</p>
</li>
</ol>
<h3 id="heading-troubleshooting-connection-issues">Troubleshooting Connection Issues</h3>
<p>If you encounter connection issues, work through these checks (a quick sanity check you can run is shown after the list):</p>
<ol>
<li><p><strong>Verify the server is running</strong> by checking the terminal where you started the MCP server.</p>
</li>
<li><p><strong>Check the paths</strong> in your configuration are correct and point to the right Python interpreter and script.</p>
</li>
<li><p><strong>Ensure your environment variables</strong> are properly set, especially the Hashnode personal access token.</p>
</li>
<li><p><strong>Check the server logs</strong> for any error messages.</p>
</li>
<li><p><strong>Try restarting</strong> both the MCP server and the Claude application.</p>
</li>
</ol>
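<p>As a quick sanity check, you can launch the server by hand with the same interpreter, script path, and token your configuration uses (the paths below are the same placeholders as in the configuration examples above):</p>
<pre><code class="lang-bash">$ export HASHNODE_PERSONAL_ACCESS_TOKEN="your-personal-access-token"
$ /path/to/your/venv/bin/python /path/to/your/hashnode-mcp-server/mcp_server.py
</code></pre>
<p>If the script exits immediately with a traceback, fix that error first; the assistant can only connect once the server process starts cleanly.</p>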
<h2 id="heading-example-creating-a-new-article">Example: Creating a New Article</h2>
<p>Let's walk through a practical example of using the Hashnode MCP Server to create and publish a new article:</p>
<ol>
<li><p><strong>Start the server</strong> as described above.</p>
</li>
<li><p><strong>Connect your AI assistant</strong> to the MCP server.</p>
</li>
<li><p><strong>Ask the AI to create an article</strong>:</p>
<pre><code class="lang-bash"> Create a new article titled <span class="hljs-string">"Getting Started with Python"</span> with the following content:

 <span class="hljs-comment"># Getting Started with Python</span>

 Python is one of the most popular programming languages today. In this article, we<span class="hljs-string">'ll explore the basics of Python and how to get started.

 ## Installation

 First, you need to install Python...

 [rest of the article content]

 Tags: python, programming, beginners</span>
</code></pre>
</li>
<li><p>The AI will use the MCP server to:</p>
<ul>
<li><p>Format the request for the Hashnode API</p>
</li>
<li><p>Send the creation request</p>
</li>
<li><p>Return the result, including the article ID and URL</p>
</li>
</ul>
</li>
<li><p>You can then ask the AI to:</p>
<ul>
<li><p>Publish the article immediately</p>
</li>
<li><p>Save it as a draft</p>
</li>
<li><p>Make further edits</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-advanced-features">Advanced Features</h2>
<h3 id="heading-timeout-handling">Timeout Handling</h3>
<p>The Hashnode MCP Server includes robust timeout handling for API requests. This is particularly important for operations like article creation and updates, which might take longer to process. If a request times out, the server provides helpful error messages and suggestions.</p>
<h3 id="heading-error-management">Error Management</h3>
<p>The server includes comprehensive error handling to provide clear feedback when issues occur. This makes troubleshooting easier and improves the user experience.</p>
<h3 id="heading-pagination-support">Pagination Support</h3>
<p>For operations that might return large amounts of data, like searching for articles, the server supports pagination to manage the response size and improve performance.</p>
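<p>Hashnode's GraphQL API uses Relay-style cursor pagination for this. As a rough sketch of the kind of request involved (the publication host and field selection here are illustrative; confirm the exact schema against the Hashnode API documentation linked in the Resources section):</p>
<pre><code class="lang-bash">$ curl -s https://gql.hashnode.com \
  -H "Content-Type: application/json" \
  -d '{"query":"query { publication(host: \"yourblog.hashnode.dev\") { posts(first: 10) { edges { node { title url } } pageInfo { hasNextPage endCursor } } } }"}'
</code></pre>
<p>When <code>hasNextPage</code> is true, the next request passes the returned <code>endCursor</code> as the <code>after</code> argument to fetch the following page.</p>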
<h2 id="heading-potential-use-cases">Potential Use Cases</h2>
<p>The Hashnode MCP Server opens up numerous possibilities for content creators:</p>
<ol>
<li><p><strong>Automated Content Creation</strong>: Generate draft articles based on outlines or topics.</p>
</li>
<li><p><strong>Content Management</strong>: Update, organize, and manage your blog without leaving your AI assistant.</p>
</li>
<li><p><strong>Research Assistance</strong>: Search your existing content to find relevant articles or avoid duplication.</p>
</li>
<li><p><strong>Batch Operations</strong>: Perform bulk updates or content audits across your blog.</p>
</li>
<li><p><strong>Integration with Workflows</strong>: Incorporate blog publishing into broader AI-assisted workflows.</p>
</li>
</ol>
<h2 id="heading-technical-architecture">Technical Architecture</h2>
<p>The project is organized with a clean, modular structure:</p>
<ul>
<li><p><code>mcp_server.py</code>: Root server implementation that can be run directly</p>
</li>
<li><p><code>hashnode_mcp/</code>: Core package containing the modular functionality</p>
<ul>
<li><p><code>mcp_server.py</code>: Package version of the server implementation</p>
</li>
<li><p><code>utils.py</code>: Utility functions for formatting responses and GraphQL queries</p>
</li>
</ul>
</li>
<li><p><code>examples/</code>: Example usage scripts</p>
</li>
<li><p><code>tests/</code>: Test suite for verifying functionality</p>
</li>
<li><p><code>run_server.py</code>: Entry point for running the server using the package version</p>
</li>
</ul>
<p>While the project includes a package structure (<code>hashnode_mcp/</code>) for organization and potential distribution, users can simply run the root <code>mcp_server.py</code> file directly without needing to use the package. This provides flexibility in how you choose to deploy the server.</p>
<p>The server uses asynchronous programming with Python's <code>asyncio</code> and <code>httpx</code> libraries for efficient API communication. GraphQL queries and mutations are defined as constants, making them easy to maintain and update.</p>
<h2 id="heading-future-enhancements">Future Enhancements</h2>
<p>There are several exciting possibilities for future development:</p>
<ol>
<li><p><strong>Additional Hashnode Features</strong>: Support for more Hashnode API capabilities like managing comments, series, and newsletters.</p>
</li>
<li><p><strong>Analytics Integration</strong>: Retrieving and analyzing blog performance metrics.</p>
</li>
<li><p><strong>Content Optimization</strong>: AI-assisted SEO optimization for articles.</p>
</li>
<li><p><strong>Multi-User Support</strong>: Enhanced capabilities for team publications.</p>
</li>
<li><p><strong>Webhook Support</strong>: Responding to events from your Hashnode blog.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The Hashnode MCP Server represents a powerful bridge between AI assistants and content creation on Hashnode. By enabling AI models to interact directly with your blog, it streamlines the writing and publishing process, making content creation more efficient and accessible.</p>
<p>Whether you're a solo blogger looking to optimize your workflow or part of a content team seeking to scale your production, this tool offers valuable capabilities for integrating AI into your content strategy.</p>
<p>I'm excited to see how others in the community will use and extend this project. The code is open-source and available on GitHub, so feel free to fork it, contribute, or adapt it to your specific needs.</p>
<h2 id="heading-resources">Resources</h2>
<ul>
<li><p><a target="_blank" href="https://github.com/sbmagar13/hashnode-mcp-server">GitHub Repository</a></p>
</li>
<li><p><a target="_blank" href="https://apidocs.hashnode.com/">Hashnode API Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://modelcontextprotocol.io/introduction">Model Context Protocol Documentation</a></p>
</li>
</ul>
<p>Thanks!</p>
<hr />
<p><em>Have you integrated AI tools into your content workflow? Share your experiences in the comments below!</em></p>
]]></content:encoded></item><item><title><![CDATA[Resize The Disk Space of EC2 Instance (Zero Downtime)]]></title><description><![CDATA[Looking for a step-by-step tutorial on how to increase disk space on your AWS EC2 instance?
I get it, I spent a lot of time trying to find the perfect guide myself. So I wrote this to save you the trouble I went through. (Now this article is always a...]]></description><link>https://blog.budhathokisagar.com.np/resize-the-disk-space-of-ec2-instance-zero-downtime</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/resize-the-disk-space-of-ec2-instance-zero-downtime</guid><category><![CDATA[ec2]]></category><category><![CDATA[ebs]]></category><category><![CDATA[AWS]]></category><category><![CDATA[volume]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Thu, 10 Apr 2025 10:14:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746161947838/0bb06bb5-2c08-42ee-a578-79682adb8976.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Looking for a step-by-step tutorial on how to increase disk space on your AWS EC2 instance?</strong></p>
<p>I get it, I spent a <em>lot</em> of time trying to find the perfect guide myself. So I wrote this to save <em>you</em> the trouble I went through. (Now this article is always a go-to solution for me too😎)</p>
<p>You’ve landed in the right place.</p>
<p>In this tutorial, you’ll learn how to easily resize your EC2 instance’s disk <strong>without</strong> detaching the volume or restarting the server. AWS provides EBS, a block storage solution, for the instance. EBS’ Elastic Volumes feature allows you to increase volume size while the volume is still in use, making the resizing process much easier and faster, with no server downtime.</p>
<p>Before extending your EBS volume, it’s good practice to back up your data with an EBS snapshot; if anything goes wrong, you can always restore from it.</p>
<h2 id="heading-create-snapshot">Create Snapshot</h2>
<p>To take a snapshot of your volume,</p>
<ul>
<li><p>Go to EBS volume attached to your instance from EC2 Dashboard.</p>
</li>
<li><p>Click on <strong>Actions</strong> and <strong>Create Snapshot.</strong></p>
</li>
<li><p>Add the <strong>description</strong> value as you wish.</p>
</li>
<li><p>You can add tags also.</p>
</li>
<li><p>Click on <strong>Create Snapshot</strong>. It may take some time. (A CLI equivalent is shown after this list.)</p>
</li>
</ul>
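<p>If you prefer the command line, the same snapshot can be created with the AWS CLI (the volume ID below is a placeholder for your own):</p>
<pre><code class="lang-bash">$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-resize backup"
</code></pre>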
<h2 id="heading-resize-the-ebs-volumes-of-ec2-instance">Resize The EBS Volumes of EC2 Instance:</h2>
<p>After creating a snapshot of your volume, now it’s time to increase your disk space.</p>
<p>Steps:</p>
<ul>
<li><p>First Go to <strong>Services</strong> &gt; <strong>EC2</strong> &gt; <strong>Instances.</strong></p>
</li>
<li><p>Go to the EBS volume attached to your instance.</p>
</li>
<li><p>From <strong>Actions</strong> click on <strong>Modify Volume.</strong></p>
</li>
<li><p>Add your desired size, I will be replacing <strong>8</strong> with <strong>30</strong>.</p>
</li>
<li><p>You’ll get a confirmation pop-up. Click on <strong>Modify</strong>. (The CLI equivalent is shown after this list.)</p>
</li>
</ul>
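<p>The same change can be made with the AWS CLI, and you can watch the modification progress (the volume ID is again a placeholder):</p>
<pre><code class="lang-bash"># Grow the volume to 30 GiB
$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 30

# Watch the modification state (optimizing/completed)
$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
</code></pre>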
<h3 id="heading-resizing-the-ebs">Resizing the EBS:</h3>
<p>You have successfully increased the volume size, but the instance isn’t using the extra space yet. To check, SSH into your server and run: <strong><em>df -h</em></strong></p>
<p>Here, you can see the disk partition is still using 8GB. Check the partition layout by running <strong><em>lsblk</em></strong> and <strong><em>blkid</em></strong>.</p>
<p>Here, <code>xvda1</code> is your current volume with <strong>8GB</strong> and <code>xvda</code> with <strong>30GB.</strong></p>
<p>📌 <em><code>xvda1</code> or similar is for Xen virtual machine based servers. For NVMe (Non-Volatile Memory Express) SSD-based devices, it will be <code>nvme0n1p1</code> or similar.</em></p>
<p>Now extend the partition:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># For xvda1 partition</span>
$ sudo growpart /dev/xvda 1
<span class="hljs-comment"># For nvme0n1p1 partition</span>
$ sudo growpart /dev/nvme0n1 1
</code></pre>
<p>And, extend the volume:</p>
<pre><code class="lang-bash">$ sudo xfs_growfs /dev/xvda1
$ sudo xfs_growfs /dev/nvme0n1p1
</code></pre>
<p>Type <code>df -h</code> to check the volume size, it must show 30GB.</p>
<p>In our case, since the file system is <code>XFS</code>, we have to use <strong>xfs_growfs</strong> tool.</p>
<p>For file systems <code>ext4</code>, <code>ext2</code>, and <code>ext3</code> you have to use <code>sudo resize2fs /dev/xvda1</code> OR <code>sudo resize2fs /dev/nvme0n1p1</code>.</p>
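<p>If you're not sure which file system your root volume uses, check before choosing the grow tool:</p>
<pre><code class="lang-bash"># The "Type" column shows xfs, ext4, etc.
$ df -hT /
$ lsblk -f
</code></pre>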
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this way, the volume is now resized and ready to be used without any downtime.</p>
<p>Thank you!</p>
]]></content:encoded></item><item><title><![CDATA[Terraform CLI Tips and Cheatsheets]]></title><description><![CDATA[This article will discuss some very useful Terraform CLI tips and cheatsheets. When you want to use a tool or improve your expertise in a particular technology, it’s good to read many articles and official documentation. However, sometimes having a b...]]></description><link>https://blog.budhathokisagar.com.np/terraform-cli-tips-and-cheatsheets</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/terraform-cli-tips-and-cheatsheets</guid><category><![CDATA[Terraform]]></category><category><![CDATA[cheatsheet]]></category><category><![CDATA[Devops]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Fri, 04 Oct 2024 08:33:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728030752422/6a49c7ae-42bd-4252-a8c8-32717f7b006a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article will discuss some very useful Terraform CLI tips and cheatsheets. When you want to use a tool or improve your expertise in a particular technology, it’s good to read many articles and official documentation. However, sometimes having a brief cheat sheet of it, can be very handy.</p>
<h2 id="heading-terraform"><strong>Terraform</strong></h2>
<p>Terraform, a Go-based program released by <a target="_blank" href="https://www.hashicorp.com/">Hashicorp</a> in 2014, is used to build, change, and version-control your infrastructure as code (IaC). It has an extremely strong and user-friendly command-line interface.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<ul>
<li><p>Basics of <a target="_blank" href="https://scanskill.com/lesson/terraform/">Terraform</a> – IaC (Terraform CLI)</p>
</li>
<li><p>Command Line basics</p>
</li>
</ul>
<h2 id="heading-terraform-cli-tips-and-cheatsheets"><strong>Terraform CLI Tips and Cheatsheets</strong></h2>
<h3 id="heading-installation"><strong>Installation</strong></h3>
<h4 id="heading-install-through-curl"><strong>Install through curl</strong></h4>
<pre><code class="lang-bash">$ curl -O &lt;https://releases.hashicorp.com/terraform/1.9.7/terraform_1.9.7_linux_amd64.zip&gt;
$ sudo unzip terraform_1.9.7_linux_amd64.zip -d /usr/<span class="hljs-built_in">local</span>/bin/
$ rm terraform_1.9.7_linux_amd64.zip
</code></pre>
<h4 id="heading-install-using-tfenv-terraform-version-manager"><strong>Install using tfenv (Terraform Version Manager)</strong></h4>
<p>First of all, download the tfenv binary and put it in your PATH.</p>
<pre><code class="lang-bash">$ git <span class="hljs-built_in">clone</span> &lt;https://github.com/tfutils/tfenv.git&gt; ~/.tfenv
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">'export PATH="$HOME/.tfenv/bin:$PATH"'</span> &gt;&gt; <span class="hljs-variable">$HOME</span>/bashrc
</code></pre>
<p>Then, you can install the desired version of Terraform:</p>
<pre><code class="lang-bash">$ tfenv install 1.9.7
</code></pre>
<h3 id="heading-usage"><strong>Usage</strong></h3>
<h4 id="heading-version-check"><strong>Version Check</strong></h4>
<pre><code class="lang-bash">$ terraform --version

Terraform v1.9.7
on linux_amd64
</code></pre>
<h4 id="heading-terraform-init"><strong>Terraform init</strong></h4>
<p>The following command is used to initialize the terraform project:</p>
<pre><code class="lang-bash">$ terraform init
</code></pre>
<p>It’s the first command you need to execute; until you run it, <code>terraform plan</code>, <code>apply</code>, <code>destroy</code>, and <code>import</code> will not work. The <code>terraform init</code> command will install the following:</p>
<ul>
<li><p>Terraform modules</p>
</li>
<li><p>Backend</p>
</li>
<li><p>Provider(s) plugins</p>
</li>
</ul>
<p>You can initialize the Terraform without any input prompt. To do this, run the following command:</p>
<pre><code class="lang-bash">$ terraform init -input=<span class="hljs-literal">false</span>
</code></pre>
<p>Also, if you want to change the backend configuration during the init command, run the following command:</p>
<pre><code class="lang-bash">$ terraform init -backend-config=proj/s3.dev.tf -reconfigure
</code></pre>
<p>Here, <code>-reconfigure</code> tells Terraform not to copy the existing state to the new remote state location.</p>
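<p>For reference, a partial backend configuration file like <code>proj/s3.dev.tf</code> usually holds just the backend arguments; a minimal sketch (the bucket, key, and region are placeholder values) looks like this:</p>
<pre><code class="lang-bash">$ cat proj/s3.dev.tf
bucket = "my-terraform-state-dev"
key    = "dev/terraform.tfstate"
region = "ap-south-1"
</code></pre>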
<h4 id="heading-terraform-get"><strong>Terraform Get</strong></h4>
<p>This command is very useful when you have defined some modules. And if you edit modules, you need to get modules content again.</p>
<pre><code class="lang-bash">$ terraform get -update=<span class="hljs-literal">true</span>
</code></pre>
<p>When using modules, the first thing you’ll need to do is run <code>terraform get</code>. This downloads the module sources into the <code>.terraform</code> directory. Until you run another <code>terraform get -update=true</code>, you’ve effectively vendored those components.</p>
<h4 id="heading-terraform-plan"><strong>Terraform Plan</strong></h4>
<p>The <strong>plan</strong> step validates the configuration for execution and creates a plan to be applied to the target infrastructure provider.</p>
<pre><code class="lang-bash">$ terraform plan -out plan.out
</code></pre>
<p>It’s a crucial Terraform tool that lets users know which actions Terraform will take before making any changes, providing confidence that a modification will have the desired effect once deployed.</p>
<p>When you execute <code>terraform plan</code>, Terraform will scan all <code>*.tf</code> files in your directory and create the plan.</p>
<h4 id="heading-terraform-apply"><strong>Terraform Apply</strong></h4>
<p>Now it’s time to execute the plan:</p>
<pre><code class="lang-bash">$ terraform apply plan.out
</code></pre>
<p>Terraform can guarantee that the execution plan will not change by generating it and applying it in the same command without the need to write it to disk. This decreases the possibility of potentially sensitive data being left behind or being incorrectly checked into version control.</p>
<pre><code class="lang-bash">$ terraform apply
</code></pre>
<ul>
<li><strong>Apply and Auto Approve</strong></li>
</ul>
<pre><code class="lang-bash">$ terraform apply -auto-approve
</code></pre>
<ul>
<li><strong>Apply and Define New Variables Value</strong></li>
</ul>
<pre><code class="lang-bash">$ terraform apply -auto-approve -var tags-repository_url=<span class="hljs-variable">${GIT_URL}</span>
</code></pre>
<ul>
<li><strong>Apply Only One Module</strong></li>
</ul>
<pre><code class="lang-bash">$ terraform apply -target=module.s3
</code></pre>
<p>This <code>-target</code> option works with <em>terraform plan</em> too.</p>
<h4 id="heading-terraform-destroy"><strong>Terraform Destroy</strong></h4>
<pre><code class="lang-bash">$ terraform destroy
</code></pre>
<p>Delete all the resources!</p>
<p>A deletion plan can be created before:</p>
<pre><code class="lang-bash">$ terraform plan –destroy
</code></pre>
<ul>
<li><code>target</code> option allows to destroy only one resource, for example, an S3 bucket :</li>
</ul>
<pre><code class="lang-bash">$ terraform destroy -target aws_s3_bucket.my_bucket
</code></pre>
<h4 id="heading-debugging-in-terraform"><strong>Debugging in Terraform</strong></h4>
<p>In Terraform, the <code>terraform console</code> command is useful for testing interpolations before using them in configurations. The console reads the configured state, even if it is remote.</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"aws_iam_user.sagar.arn"</span> | terraform console
</code></pre>
<pre><code class="lang-bash">arn:aws:iam::123456789:user/sagar
</code></pre>
<h4 id="heading-graph-in-terraform"><strong>Graph in Terraform</strong></h4>
<pre><code class="lang-bash">$ terraform graph | dot –Tpng &gt; graph.png
</code></pre>
<p>Visual representation (graph) of Terraform resources.</p>
<h4 id="heading-terraform-validate"><strong>Terraform Validate</strong></h4>
<p>The validate command is used to check the syntax of the Terraform files. A syntax check is done on all the Terraform files in the directory, and an error is displayed if any of the files don’t validate. Note that the syntax check does not catch every common issue.</p>
<pre><code class="lang-bash">$ terraform validate
</code></pre>
<h4 id="heading-providers"><strong>Providers</strong></h4>
<p>You can use a lot of providers/plugins in your Terraform definition resources, so it can be useful to have a tree of providers used by modules in your project.</p>
<pre><code class="lang-bash">$ terraform providers
.
├── provider.aws ~&gt; 4.49.0
├── module.my_module
│   ├── provider.aws (inherited)
│   ├── provider.null
│   └── provider.template
└── module.elastic
└── provider.aws (inherited)
</code></pre>
<h3 id="heading-state"><strong>State</strong></h3>
<h4 id="heading-pull-remote-state-in-a-local-copy"><strong>Pull Remote State in A Local Copy</strong></h4>
<pre><code class="lang-bash">$ terraform state pull &gt; terraform.tfstate
</code></pre>
<h4 id="heading-push-state-in-a-remote-backend-storage"><strong>Push State in a Remote Backend storage</strong></h4>
<pre><code class="lang-bash">$ terraform state push
</code></pre>
<p>This command is useful if, for example, you originally use a local tf state and then you define backend storage, in S3 or Consul…</p>
<h4 id="heading-how-to-tell-to-terraform-you-moved-a-resource-in-a-module"><strong>How to Tell to Terraform You Moved a Resource in A Module?</strong></h4>
<p>If you moved an existing resource in a module, you need to update the state:</p>
<pre><code class="lang-bash">$ terraform state mv aws_iam_role.role1 module.mymodule
</code></pre>
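<p>If you're unsure of the exact resource addresses to move, list everything tracked in the state first:</p>
<pre><code class="lang-bash">$ terraform state list
</code></pre>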
<h4 id="heading-how-to-import-existing-resources-in-terraform"><strong>How to Import Existing Resources in Terraform?</strong></h4>
<p>If you have an existing resource in your infrastructure provider, you can import it in your Terraform state:</p>
<pre><code class="lang-bash">$ terraform import aws_iam_policy.elastic_post
</code></pre>
<pre><code class="lang-bash">arn:aws:iam::123456789:policy/elastic_post
</code></pre>
<h3 id="heading-workspaces"><strong>Workspaces</strong></h3>
<p>To manage multiple distinct sets of infrastructure resources/environments.</p>
<p>Instead of creating a directory for each environment to manage, we can just create the needed workspaces and use them:</p>
<h4 id="heading-create-workspace"><strong>Create Workspace</strong></h4>
<p>This command creates a new workspace and then selects it:</p>
<pre><code class="lang-bash">$ terraform workspace new dev
</code></pre>
<h4 id="heading-select-a-workspace"><strong>Select a Workspace</strong></h4>
<pre><code class="lang-bash">$ terraform workspace select dev
</code></pre>
<h4 id="heading-list-workspaces"><strong>List Workspaces</strong></h4>
<pre><code class="lang-bash">$ terraform workspace list

default
* dev
staging
</code></pre>
<h4 id="heading-show-current-workspace"><strong>Show Current Workspace</strong></h4>
<pre><code class="lang-bash">$ terraform workspace show

dev
</code></pre>
<h3 id="heading-tools"><strong>Tools</strong></h3>
<h4 id="heading-1-jq"><strong>1. jq</strong></h4>
<p><strong><em>jq</em></strong> is a command-line JSON processor. It’s very lightweight and it can be used with Terraform output to make Terraform more powerful.</p>
<h5 id="heading-installation-1"><strong>Installation</strong></h5>
<p><strong>For Linux:</strong></p>
<pre><code class="lang-bash">$ sudo apt-get install jq
</code></pre>
<p>or</p>
<pre><code class="lang-bash">$ yum install jq
</code></pre>
<p><strong>For OS X:</strong></p>
<pre><code class="lang-bash">$ brew install jq
</code></pre>
<h5 id="heading-usage-1"><strong>Usage</strong></h5>
<p>For example, we defined outputs in a module and when we execute <em>terraform apply</em> outputs are displayed:</p>
<pre><code class="lang-bash">$ terraform apply

...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:
elastic_endpoint = vpc-toto-12fgfd4d5f4ds5fngetwe4.ap-south-1.es.amazonaws.com
</code></pre>
<p>We can extract the value that we want to use in a script for example. With <strong><em>jq</em></strong> it’s easy:</p>
<pre><code class="lang-bash">$ terraform output -json

{
    <span class="hljs-string">"elastic_endpoint"</span>: {
        <span class="hljs-string">"sensitive"</span>: <span class="hljs-literal">false</span>,
        <span class="hljs-string">"type"</span>: <span class="hljs-string">"string"</span>,
        <span class="hljs-string">"value"</span>: <span class="hljs-string">"vpc-toto-12fgfd4d5f4ds5fngetwe4.ap-south-1.es.amazonaws.com"</span>
    }
}
</code></pre>
<pre><code class="lang-bash">$ terraform output -json | jq <span class="hljs-string">'.elastic_endpoint.value'</span>

<span class="hljs-string">"vpc-toto-12fgfd4d5f4ds5fngetwe4.ap-south-1.es.amazonaws.com"</span>
</code></pre>
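<p>On recent Terraform versions (0.15+), you can also skip <code>jq</code> for single values with the built-in <code>-raw</code> flag:</p>
<pre><code class="lang-bash">$ terraform output -raw elastic_endpoint
</code></pre>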
<h4 id="heading-2-terraforming"><strong>2. Terraforming</strong></h4>
<p>If you have an existing AWS account with existing components like S3 buckets, SNS, VPC…, you can use Terraforming, a tool written in Ruby that extracts existing AWS resources and converts them to Terraform files!</p>
<h5 id="heading-installation-2"><strong>Installation</strong></h5>
<pre><code class="lang-bash">$ sudo apt install ruby 
<span class="hljs-comment"># OR </span>
$ sudo yum install ruby
</code></pre>
<p>and</p>
<pre><code class="lang-bash">$ gem install terraforming
</code></pre>
<h5 id="heading-usage-2"><strong>Usage</strong></h5>
<h6 id="heading-pre-requisites"><strong>Pre-requisites:</strong></h6>
<p>As in Terraform, you need to set AWS credentials:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">export</span> AWS_ACCESS_KEY_ID=<span class="hljs-string">"an_aws_access_key"</span>
$ <span class="hljs-built_in">export</span> AWS_SECRET_ACCESS_KEY=<span class="hljs-string">"a_aws_secret_key"</span>
$ <span class="hljs-built_in">export</span> AWS_DEFAULT_REGION=<span class="hljs-string">"eu-central-1"</span>
</code></pre>
<p>You can also specify the credential profile in <em>~/.aws/credentials</em> and select it with the <code>--profile</code> option.</p>
<pre><code class="lang-bash">$ cat ~/.aws/credentials

[sagar]
aws_access_key_id = xxx
aws_secret_access_key = xxx
aws_default_region = eu-central-1
</code></pre>
<pre><code class="lang-bash">$ terraforming s3 --profile sagar
</code></pre>
<h6 id="heading-example"><strong>Example</strong></h6>
<pre><code class="lang-bash">$ terraforming --<span class="hljs-built_in">help</span>

Commands:
terraforming alb <span class="hljs-comment"># ALB</span>
...
terraforming vgw <span class="hljs-comment"># VPN Gateway</span>
terraforming vpc <span class="hljs-comment"># VPC</span>
</code></pre>
<pre><code class="lang-bash">$ terraforming s3 &gt; aws_s3.tf
</code></pre>
<p><strong><em>Note:</em></strong> <em>Terraforming can’t extract API Gateway resources for the moment, so you need to write those manually.</em></p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this article, I talked about some of the important Terraform CLI tips and cheatsheets. There is still a lot more than this, but you can go and explore them yourself. Thank you!</p>
]]></content:encoded></item><item><title><![CDATA[Linting The Docker Image With Dockle]]></title><description><![CDATA[In this article, I’ll demonstrate how the Linting Docker image works with Dockle. By the end of this post, you’ll get to know in detail about linting docker images with Dockle.
You need to lint the container docker images to enforce security aspects ...]]></description><link>https://blog.budhathokisagar.com.np/linting-the-docker-image-with-dockle</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/linting-the-docker-image-with-dockle</guid><category><![CDATA[Linux]]></category><category><![CDATA[Docker]]></category><category><![CDATA[docker images]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Tue, 01 Oct 2024 09:44:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727775780972/ecf24ab8-4832-44fe-8b4b-ac88c78c57a2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, I’ll demonstrate how <strong><em>the Linting Docker image works with Dockle</em></strong>. By the end of this post, you’ll get to know in detail about linting docker images with Dockle.</p>
<p>You need to lint your container images to enforce their security aspects: linting minimizes the attack surface by hardening individual images. Using linting alongside vulnerability scanning is a security best-practice pattern.</p>
<h2 id="heading-dockle"><strong>Dockle</strong></h2>
<p><code>Dockle</code> is a linter for container images. In comparison to other linters, <code>Dockle</code> gives us confidence that our container images were built in accordance with well-known, proven security best practices. For example, <code>Dockle</code> verifies many best practices specified as part of the <a target="_blank" href="https://www.cisecurity.org/cis-benchmarks/">CIS benchmarks</a>. It is also important to remember that <code>Dockle</code> does not lint Dockerfiles; it lints the built container images.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<ul>
<li><p>Understanding of Docker</p>
</li>
<li><p>docker configured on your system</p>
</li>
</ul>
<h2 id="heading-set-up-linting-docker-image-with-dockle"><strong>Set Up: Linting Docker image with Dockle</strong></h2>
<p>Now that you know a bit about <strong><em>Dockle</em></strong>, let’s dive into its practical implementation:</p>
<h3 id="heading-dockle-installation"><strong>Dockle Installation</strong></h3>
<h4 id="heading-on-linux"><strong>On Linux</strong></h4>
<p>You can install Dockle on Linux in different ways depending on your distribution. (In my case, I’m using an Arch-based Linux, so the <em>Arch User Repository (AUR)</em> is my first priority :-))</p>
<p><strong>For Arch-based distro</strong></p>
<pre><code class="lang-bash"><span class="hljs-comment"># clone the repo</span>
$ git <span class="hljs-built_in">clone</span> https://aur.archlinux.org/dockle-bin.git

$ <span class="hljs-built_in">cd</span> dockle-bin

<span class="hljs-comment"># build and install the package</span>
$ makepkg -sri
</code></pre>
<p><strong>For Ubuntu/Debian</strong></p>
<pre><code class="lang-bash">$ VERSION=$(
 curl --silent <span class="hljs-string">"&lt;https://api.github.com/repos/goodwithtech/dockle/releases/latest&gt;"</span> | \\
 grep <span class="hljs-string">'"tag_name":'</span> | \\
 sed -E <span class="hljs-string">'s/.*"v([^"]+)".*/\\1/'</span> \\
) &amp;&amp; curl -L -o dockle.deb &lt;https://github.com/goodwithtech/dockle/releases/download/v<span class="hljs-variable">${VERSION}</span>/dockle_<span class="hljs-variable">${VERSION}</span>_Linux-64bit.deb&gt;

<span class="hljs-comment"># Extract and delete the pakage</span>
$ sudo dpkg -i dockle.deb &amp;&amp; rm dockle.deb
</code></pre>
<p><strong>For RHEL/CentOS</strong></p>
<pre><code class="lang-bash">$ VERSION=$(
 curl --silent <span class="hljs-string">"&lt;https://api.github.com/repos/goodwithtech/dockle/releases/latest&gt;"</span> | \\
 grep <span class="hljs-string">'"tag_name":'</span> | \\
 sed -E <span class="hljs-string">'s/.*"v([^"]+)".*/\\1/'</span> \\
) &amp;&amp; rpm -ivh &lt;https://github.com/goodwithtech/dockle/releases/download/v<span class="hljs-variable">${VERSION}</span>/dockle_<span class="hljs-variable">${VERSION}</span>_Linux-64bit.rpm&gt;
</code></pre>
<h4 id="heading-on-windows"><strong>On Windows</strong></h4>
<p>Run the following in a Bash-compatible shell (e.g., Git Bash):</p>
<pre><code class="lang-bash">$ VERSION=$(
 curl --silent <span class="hljs-string">"https://api.github.com/repos/goodwithtech/dockle/releases/latest"</span> | \
 grep <span class="hljs-string">'"tag_name":'</span> | \
 sed -E <span class="hljs-string">'s/.*"v([^"]+)".*/\1/'</span> \
) &amp;&amp; curl -L -o dockle.zip https://github.com/goodwithtech/dockle/releases/download/v<span class="hljs-variable">${VERSION}</span>/dockle_<span class="hljs-variable">${VERSION}</span>_Windows-64bit.zip

<span class="hljs-comment"># Extract and delete the ZIP archive</span>
$ unzip dockle.zip &amp;&amp; rm dockle.zip
$ ./dockle.exe [IMAGE_NAME]
</code></pre>
<h4 id="heading-on-macos"><strong>On macOS</strong></h4>
<p>For Mac, you can use Homebrew;</p>
<pre><code class="lang-bash">$ brew install goodwithtech/r/dockle
</code></pre>
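<p>If you'd rather not install anything, Dockle is also distributed as a container image; a typical invocation looks like the following (pin the tag to the current release):</p>
<pre><code class="lang-bash">$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock goodwithtech/dockle:v0.4.14 example:latest
</code></pre>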
<h2 id="heading-implementation-linting-docker-image-with-dockle"><strong>Implementation: Linting docker image with Dockle</strong></h2>
<p>It’s time to lint some images now that you’ve installed Dockle-CLI on your machine. If you’re already using container technologies, you probably have some container images on your local machine. However, for the sake of demonstration, let’s create a simple web server image.</p>
<h3 id="heading-create-and-build-a-sample-web-server-image"><strong>Create and build a <em>sample</em> web server image</strong></h3>
<p>To do so create a file <code>Dockerfile</code> and add the following lines:</p>
<pre><code class="lang-bash">FROM nginx:alpine
EXPOSE 80
</code></pre>
<p>Build the container by running the following command:</p>
<pre><code class="lang-bash">$ docker build -t example:latest .

Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM nginx:alpine
alpine: Pulling from library/nginx
c158987b0551: Pull complete 
1e35f6679fab: Pull complete 
cb9626c74200: Pull complete 
b6334b6ace34: Pull complete 
f1d1c9928c82: Pull complete 
9b6f639ec6ea: Pull complete 
ee68d3549ec8: Pull complete 
Digest: sha256:dd8a054d7ef030e94a6449783605d6c306c1f69c10c2fa06b66a030e0d1db793
Status: Downloaded newer image <span class="hljs-keyword">for</span> nginx:alpine
 ---&gt; 1e415454686a
Step 2/2 : EXPOSE 80
 ---&gt; Running <span class="hljs-keyword">in</span> bbfb3fd7cb20
Removing intermediate container bbfb3fd7cb20
 ---&gt; 0ccb22ae7702
Successfully built 0ccb22ae7702
Successfully tagged example:latest
</code></pre>
<h4 id="heading-linting-docker-image-with-dockle"><strong>Linting docker image with Dockle</strong></h4>
<p>Now, let’s lint the container image using the <strong>Dockle</strong> CLI command:</p>
<pre><code class="lang-bash">$ dockle example:latest
</code></pre>
<p>Output:</p>
<p><img src="https://scanskill.com/wp-content/uploads/2022/12/Selection_505.png" alt="linting docker image" /></p>
<p>linting docker image with dockle</p>
<h4 id="heading-explanation"><strong>Explanation</strong></h4>
<p>As you can see, we have a number of warnings and informational findings. Although we can address the majority of the issues by updating our <code>Dockerfile</code>, <code>CIS-DI-0005</code> requires you to configure your local Docker installation to sign container images (Docker Content Trust). We can also address <code>DKL-DI-0006</code> by building our image with a tag other than <code>latest</code>. First, let’s address the issues that we discovered in the <code>Dockerfile</code>. Update the <code>Dockerfile</code> as shown in the example below:</p>
<pre><code class="lang-bash">FROM nginx:alpine
EXPOSE 80

<span class="hljs-comment"># Adding health check to address CIS-DI-0006</span>
HEALTHCHECK --interval=30s --timeout=2s --start-period=5s --retries=3 CMD curl -f http://localhost/index.html || <span class="hljs-built_in">exit</span> 1

<span class="hljs-comment"># Add a new user in a group(new in this case)</span>
RUN addgroup -S cloudyfox &amp;&amp; adduser -S sagar -G cloudyfox \
 &amp;&amp; mkdir -p /var/run/nginx /var/tmp/nginx \
 &amp;&amp; chown -R sagar:cloudyfox /usr/share/nginx /var/run/nginx /var/tmp/nginx

<span class="hljs-comment"># Copy custom NGINX configuration to the image</span>
COPY nginx.conf /etc/nginx/nginx.conf

<span class="hljs-comment"># Switch user context to address CIS-DI-0001</span>
USER sagar:cloudyfox
</code></pre>
<p>Next, create an <code>nginx.conf</code> configuration file in the same directory as the <code>Dockerfile</code> and add the following snippet:</p>
<pre><code class="lang-bash">worker_processes 1;
error_log /var/<span class="hljs-built_in">log</span>/nginx/error.log warn;
pid    /var/run/nginx/nginx.pid;
events {
  worker_connections 1024;
}
http {
  client_body_temp_path /var/tmp/nginx/client_body;
  fastcgi_temp_path /var/tmp/nginx/fastcgi_temp;
  proxy_temp_path /var/tmp/nginx/proxy_temp;
  scgi_temp_path /var/tmp/nginx/scgi_temp;
  uwsgi_temp_path /var/tmp/nginx/uwsgi_temp;
  include    /etc/nginx/mime.types;
  default_type application/octet-stream;
  log_format main <span class="hljs-string">'$remote_addr - $remote_user [$time_local] "$request" '</span>
           <span class="hljs-string">'$status $body_bytes_sent "$http_referer" '</span>
           <span class="hljs-string">'"$http_user_agent" "$http_x_forwarded_for"'</span>;
  access_log /var/<span class="hljs-built_in">log</span>/nginx/access.log main;
  sendfile    on;
  keepalive_timeout 65;
  include /etc/nginx/conf.d/*.conf;
}
</code></pre>
<p>Finally, let’s build again our <code>Dockerfile</code> with the tag as <code>0.1.0</code> which addresses <code>DKL-DI-0006</code>:</p>
<pre><code class="lang-bash">$ docker build -t example:0.1.0 .

Sending build context to Docker daemon  4.096kB
Step 1/6 : FROM nginx:alpine
 ---&gt; 1e415454686a
Step 2/6 : EXPOSE 80
 ---&gt; Using cache
 ---&gt; 0ccb22ae7702
Step 3/6 : HEALTHCHECK --interval=30s --timeout=2s --start-period=5s --retries=3 CMD curl -f http://localhost/index.html || <span class="hljs-built_in">exit</span> 1
 ---&gt; Running <span class="hljs-keyword">in</span> ad633cc5a53f
Removing intermediate container ad633cc5a53f
 ---&gt; 85e8fae17637
Step 4/6 : RUN addgroup -S cloudyfox &amp;&amp; adduser -S sagar -G cloudyfox  &amp;&amp; mkdir -p /var/run/nginx /var/tmp/nginx  &amp;&amp; chown -R sagar:cloudyfox /usr/share/nginx /var/run/nginx /var/tmp/nginx
 ---&gt; Running <span class="hljs-keyword">in</span> 197fe5351548
Removing intermediate container 197fe5351548
 ---&gt; 3f2d6651de9e
Step 5/6 : COPY nginx.conf /etc/nginx/nginx.conf
 ---&gt; 1902f88b672c
Step 6/6 : USER sagar:cloudyfox
 ---&gt; Running <span class="hljs-keyword">in</span> b4d9cd554de9
Removing intermediate container b4d9cd554de9
 ---&gt; bc03ec6dd426
Successfully built bc03ec6dd426
</code></pre>
<p>Now, lint the docker image:</p>
<pre><code class="lang-bash">$ dockle example:0.1.0

INFO    - CIS-DI-0005: Enable Content trust <span class="hljs-keyword">for</span> Docker
        * <span class="hljs-built_in">export</span> DOCKER_CONTENT_TRUST=1 before docker pull/build
</code></pre>
<p>And let’s enable Content trust for Docker:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">export</span> DOCKER_CONTENT_TRUST=1
</code></pre>
<p>Again build the new version with <code>0.1.1</code> and lint the docker container image:</p>
<pre><code class="lang-bash">$ docker build -t example:0.1.1 .
</code></pre>
<p>Now, you don’t see any warnings or findings, which means you have successfully passed the <code>dockle</code> linting test.</p>
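<p>To enforce this in CI, Dockle can fail the build when findings at or above a given severity remain:</p>
<pre><code class="lang-bash"># Non-zero exit code when findings of level "warn" or above exist
$ dockle --exit-code 1 --exit-level warn example:0.1.1
</code></pre>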
<p>That’s it.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this, we practically went through <strong>linting the Docker image with Dockle</strong>, why container images should be linted, and how we can use <code>dockle</code> as a linter along with the installation process.</p>
<p>In short, you learned linting container images with Dockle which will discover weak container images and provide detailed information about how to improve, harden, and optimize our container images.</p>
<p>Thank you!</p>
]]></content:encoded></item><item><title><![CDATA[Install MongoDB on EC2 Instance — Solved Connection Issue From Public DNS]]></title><description><![CDATA[In this article, we will install MongoDB on an EC2 instance in AWS. Installing MongoDB on EC2 via aptitude is very simple. To install MongoDB on your EC2 Ubuntu system you can follow the official MongoDB-org package, which MongoDB Inc. maintains.
Pre...]]></description><link>https://blog.budhathokisagar.com.np/install-mongodb-on-ec2-instance-solved-connection-issue-from-public-dns</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/install-mongodb-on-ec2-instance-solved-connection-issue-from-public-dns</guid><category><![CDATA[MongoDB]]></category><category><![CDATA[ec2]]></category><category><![CDATA[AWS]]></category><category><![CDATA[dns]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Tue, 24 Sep 2024 05:16:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727154720858/6ee6508c-1818-41a8-9c8b-87e3878ebeff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>In this article, we will install MongoDB on an EC2 instance in AWS. Installing MongoDB on EC2 via aptitude is very simple. To install MongoDB on your EC2 Ubuntu system you can follow the official MongoDB-org package, which MongoDB Inc. maintains.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>AWS Account</p>
<hr />
<h3 id="heading-launch-ec2-instance">Launch EC2 Instance</h3>
<p>Pick any AMI (for this I am using <strong><em>Ubuntu 20.04)</em></strong>, select the desired <strong><em>instance type</em></strong>, <strong><em>Storage</em></strong>, and configure the proper VPC, subnet, etc.</p>
<p>Create or pick an existing security group that has an SSH port enabled in the inbound rule.</p>
<p>Launch the instance by creating a new one or using an existing keypair. For detailed instructions on launching instances, follow the <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html">official documentation</a>.</p>
<h3 id="heading-install-mongodb-on-ec2">Install MongoDB on EC2</h3>
<p>SSH into the server instance by running the following commands (make sure you’re pointing at your instance’s private-key file):</p>
<pre><code class="lang-bash">$ chmod 400 &lt;keypair name&gt; $ ssh -i ~/dir/&lt;keypair name&gt; ubuntu@&lt;EC2 instance IP address&gt;
</code></pre>
<p>Now, import the MongoDB GPG key so apt can verify packages from the MongoDB repository:</p>
<pre><code class="lang-bash">$ wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -
</code></pre>
<p>And add sources to your system:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse"</span> | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
</code></pre>
<p>Update the package:</p>
<pre><code class="lang-bash">$ sudo apt update
</code></pre>
<p>Now you’re ready to install MongoDB on your system:</p>
<pre><code class="lang-bash">$ sudo apt install -y mongodb-org
</code></pre>
<p>You’ve successfully installed MongoDB on your system, let’s start the mongo service and verify:</p>
<pre><code class="lang-bash">$ sudo systemctl start mongod $ sudo systemctl status mongod
</code></pre>
<p>You can enable the service to start every time you reboot the system by running the following command:</p>
<pre><code class="lang-bash">$ sudo systemctl <span class="hljs-built_in">enable</span> mongod
</code></pre>
<p>Here, you’ve successfully installed MongoDB on your Ubuntu Server in AWS.</p>
<h3 id="heading-connect-to-remote-mongodb-server">Connect To Remote MongoDB Server</h3>
<p>In this section, we’ll set up user authentication for Mongo so that we can read and write to the MongoDB server.</p>
<h3 id="heading-user-setup">User Setup</h3>
<p>SSH into the server and type <code>mongo</code> to run Mongo shell. For this tutorial, I'm gonna set up a user <code>sagar</code> and give read-write access to the <code>example_db</code> database.</p>
<pre><code class="lang-bash">use example_db
</code></pre>
<pre><code class="lang-bash">db.createUser({ user: <span class="hljs-string">'sagar'</span>, <span class="hljs-built_in">pwd</span>: <span class="hljs-string">'my-password'</span>, roles: [{ role: <span class="hljs-string">'readWrite'</span>, db:<span class="hljs-string">'example_db'</span>}] })
</code></pre>
<h3 id="heading-enable-mongodb-access-to-all-ips">Enable MongoDB access to all IPs</h3>
<p>Edit <code>/etc/mongod.conf</code> file:</p>
<pre><code class="lang-bash">$ sudo nano /etc/mongod.conf
</code></pre>
<p>Look for the <code>net</code> line and comment out the <code>bindIp</code> line under it, which currently limits MongoDB connections to localhost (127.0.0.1).</p>
<p><strong><em>Note:</em></strong> <em>do not comment out the</em> <code>bindIp</code> <em>line without enabling authorization; authorization can be enabled by un-commenting the</em> <code># security</code> <em>section and adding</em> <code>authorization: 'enabled'</code><em>.</em></p>
<pre><code class="lang-bash"><span class="hljs-comment"># network interfaces net: port: 27017 # bindIp: 127.0.0.1 &lt;-- comment out this</span>
</code></pre>
<pre><code class="lang-bash">security: authorization: <span class="hljs-string">'enabled'</span>
</code></pre>
<h3 id="heading-open-port-27017-on-your-ec2-server">Open port 27017 on your EC2 server</h3>
<p>Go to <code>Security Groups</code> of your instance.<br />Edit the inbound rule on your server's security group by allowing Custom TCP on the port <code>27017</code> (you can set the traffic source as anywhere <code>0.0.0.0/0</code> or as per your requirements).</p>
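<p>With the AWS CLI, the equivalent inbound rule (the security group ID is a placeholder) would be:</p>
<pre><code class="lang-bash">$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 27017 --cidr 0.0.0.0/0
</code></pre>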
<h3 id="heading-restart-and-check-the-status-of-mongo-daemon">Restart and Check the Status of Mongo Daemon</h3>
<p>Restart:</p>
<pre><code class="lang-bash">$ sudo service mongod restart
</code></pre>
<p>Check the status:</p>
<pre><code class="lang-bash">$ sudo service mongod status
</code></pre>
<p><img src="https://cdn-images-1.medium.com/max/800/0*MMejjoctTVpev-Wq" alt /></p>
<p><img src="https://cdn-images-1.medium.com/max/800/0*3GK13FWB_F5owvFH" alt /></p>
<p>If anything goes wrong or not, you can always check the logs:</p>
<pre><code class="lang-bash">$ sudo tail -f /var/<span class="hljs-built_in">log</span>/mongodb/mongod.log
</code></pre>
<p><img src="https://cdn-images-1.medium.com/max/800/0*MZUAJAMsTODluIW_" alt /></p>
<p><img src="https://cdn-images-1.medium.com/max/800/0*YtcdH0plNzgCJVF7" alt /></p>
<h3 id="heading-accessing-mongodb">Accessing MongoDB</h3>
<h4 id="heading-using-mongo-shell">Using Mongo Shell:</h4>
<p>Access the remote Mongo Database we just set up:</p>
<pre><code class="lang-bash">$ mongo -u sagar -p my-password &lt;Instance<span class="hljs-string">'s public IP&gt;/example_db</span>
</code></pre>
<p>Here you go, now you can read and write within the <code>example_db</code> database without <code>ssh</code>.</p>
<h4 id="heading-using-mongo-client">Using Mongo Client:</h4>
<pre><code class="lang-bash">Host = mongodb://sagar:my-password@&lt;Instance<span class="hljs-string">'s public IP&gt;/example_db Port = 27017 (default)</span>
</code></pre>
<h3 id="heading-fix-1-cannot-access-from-other-ips">Fix-1 Cannot access from other IPs</h3>
<p>By default, the MongoDB server only allows connections from localhost (127.0.0.1). So, if you face an issue connecting to the database, you can fix it by binding to all IPs.</p>
<p>To allow connections from elsewhere in your VPC edit <code>/etc/mongod.conf</code>:</p>
<p>Look for the <code>net</code> line and replace the <code>bindIp</code> value, which currently limits MongoDB connections to <code>localhost</code>, with <code>0.0.0.0, ::</code> to bind all IPv4 and IPv6 addresses.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># network interfaces net: port: 27017 bindIp: 127.0.0.1 # change this to 0.0.0.0, to bind to all IP addresses</span>
</code></pre>
<p><strong><em>Note:</em></strong> <em>make sure you have enabled authorization in the</em> <code>security:</code> <em>section by adding</em> <code>authorization: 'enabled'</code> <em>(as we’ve already done above), to prevent unauthorized access to the Mongo database on your server</em>.</p>
<p>Restart the Mongo daemon(MongoDB).</p>
<p>Now, you’re good to go ahead.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>In this tutorial, you have successfully learned to install MongoDB on EC2 and access from the shell as well as Mongo clients.</p>
<p>Thanks!</p>
<p>keep supporting!!</p>
]]></content:encoded></item><item><title><![CDATA[ElasticSearch on AWS EC2 using Terraform]]></title><description><![CDATA[In this, we’ll learn to set up an ElasticSearch Stack on AWS EC2. Elastic Stack consists of ElasticSearch, Filebeat, LogStash, and Kibana(ELK stack) which brings all the logs and traces into a single place. This is one of the most popular tools for s...]]></description><link>https://blog.budhathokisagar.com.np/elasticsearch-on-aws-ec2-using-terraform</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/elasticsearch-on-aws-ec2-using-terraform</guid><category><![CDATA[elasticsearch]]></category><category><![CDATA[elastic stack]]></category><category><![CDATA[filebeat]]></category><category><![CDATA[logstash]]></category><category><![CDATA[kibana]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[ec2]]></category><category><![CDATA[AWS]]></category><category><![CDATA[aws ec2]]></category><category><![CDATA[logging]]></category><category><![CDATA[cluster]]></category><category><![CDATA[Devops]]></category><category><![CDATA[monitoring]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Wed, 03 Jul 2024 05:36:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719984994982/07d6614b-2288-4036-b63a-884a1d626101.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this, we’ll learn to set up an <strong>ElasticSearch</strong> Stack on <strong>AWS EC2</strong>. Elastic Stack consists of <strong>ElasticSearch</strong>, <strong>Filebeat</strong>, <strong>LogStash</strong>, and <strong>Kibana</strong>(ELK stack) which brings all the logs and traces into a single place. This is one of the most popular tools for storing and viewing logs.</p>
<p>In Elastic Stack—</p>
<ul>
<li><p><strong>ElasticSearch</strong> is used to store data.</p>
</li>
<li><p><strong>Filebeat</strong> transfers the logs into ElasticSearch through LogStash</p>
</li>
<li><p><strong>LogStash</strong> filters the logs</p>
</li>
<li><p><strong>Kibana</strong> helps in visualizing the data and navigating the logs.</p>
</li>
</ul>
<p>In this article, we’re going to deploy <strong>ElasticStack on AWS EC2 using Terraform</strong>.</p>
<h2 id="heading-why-run-your-own-elastic-stack-on-aws-ec2-instead-of-hosted-services"><strong>Why run your own Elastic Stack on AWS EC2 instead of hosted services</strong></h2>
<p>We can create <strong>ElasticSearch</strong> in AWS either by using <strong>Elastic Cloud</strong> or by using the <strong>AWS ElasticSearch Service (OpenSearch)</strong>. But running our own <strong>ElasticSearch</strong> on <strong>AWS EC2</strong> instead of hosted services has the following advantages:</p>
<ul>
<li><p>Cheaper</p>
</li>
<li><p>Full control over configuration, accessibility, and visibility.</p>
</li>
<li><p>Easy plugins installation</p>
</li>
<li><p>Access logs</p>
</li>
<li><p>Perform any configuration changes</p>
</li>
<li><p>No boundary in choosing any instance type</p>
</li>
</ul>
<h2 id="heading-prerequisites">PREREQUISITES</h2>
<ul>
<li><p>AWS and Terraform Knowledge</p>
</li>
<li><p>AWS Credentials</p>
</li>
</ul>
<h2 id="heading-creating-elastic-stack-on-aws-ec2"><strong>Creating Elastic Stack on AWS EC2</strong></h2>
<p>Here we’ll create an Elastic Stack in a <strong>VPC</strong>. All the logs are created and stored inside /var/log; <strong>Filebeat</strong> on EC2 picks up these logs and sends them to <strong>LogStash</strong>, which applies filters and forwards them to <strong>ElasticSearch</strong>.</p>
<p>Finally, <strong>Kibana</strong> will be configured to display logs from <strong>ElasticSearch</strong>, which can be accessed from the <strong>Kibana dashboard</strong>.</p>
<h2 id="heading-configuration-set-upimportant">Configuration Set-Up(<em>Important</em>)</h2>
<p>To set up the configuration for each component, we first need to install the components on our EC2 server. For the sake of simplicity, we’ll use Terraform’s template mechanism (the <code>templatefile()</code> function or <code>template_file</code> data source) to replace each default configuration file, and this must be done before starting the component.</p>
<ol>
<li><h3 id="heading-vpc-subnets-set-up"><strong>VPC, subnets set-up</strong></h3>
</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-comment">#basic setup</span>
resource <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"elastic_stack_vpc"</span>{
  cidr_block = cidrsubnet(<span class="hljs-string">"172.20.0.0/16"</span>,0,0)
  tags={
    Name=<span class="hljs-string">"example-elasticsearch_vpc"</span>
  }
}
resource <span class="hljs-string">"aws_internet_gateway"</span> <span class="hljs-string">"elastic_stack_ig"</span> {
  vpc_id = aws_vpc.elastic_vpc.id
  tags = {
    Name = <span class="hljs-string">"example_elasticsearch_igw"</span>
  }
}
resource <span class="hljs-string">"aws_route_table"</span> <span class="hljs-string">"elastic_stack_rt"</span> {
  vpc_id = aws_vpc.elastic_vpc.id
  route {
    cidr_block = <span class="hljs-string">"0.0.0.0/0"</span>
    gateway_id = aws_internet_gateway.elastic_internet_gateway.id
  }
  tags = {
    Name = <span class="hljs-string">"example_elasticsearch_rt"</span>
  }
}
resource <span class="hljs-string">"aws_main_route_table_association"</span> <span class="hljs-string">"elastic_stack_rt_main"</span> {
  vpc_id         = aws_vpc.elastic_vpc.id
  route_table_id = aws_route_table.elastic_rt.id
}
resource <span class="hljs-string">"aws_subnet"</span> <span class="hljs-string">"elastic_stack_subnet"</span>{
  for_each = {ap-south-1a=cidrsubnet(<span class="hljs-string">"172.20.0.0/16"</span>,8,10),ap-south-1b=cidrsubnet(<span class="hljs-string">"172.20.0.0/16"</span>,8,20)}
  vpc_id = aws_vpc.elastic_stack_vpc.id
  availability_zone = each.key
  cidr_block = each.value
  tags={
    Name=<span class="hljs-string">"elasticsearch_subnet_<span class="hljs-variable">${each.key}</span>"</span>
  }
}
</code></pre>
<p>Now, set up the ElasticSearch cluster:</p>
<ol start="2">
<li><h3 id="heading-elasticsearch-cluster-set-up"><strong>ElasticSearch cluster set-up</strong></h3>
</li>
</ol>
<p>Here we’ll set up an ElasticSearch cluster with two master nodes and one data node spread across different AZs. In the security group for ElasticSearch, add an inbound rule for <strong>port 9200</strong>. This is required so that Kibana can access it.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># elasticsearch security group</span>
resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"elasticsearch_sg"</span> {
  vpc_id = var.vpc_id
  description = <span class="hljs-string">"ElasticSearch Security Group"</span>
  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 22
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 22
  }
  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    from_port = 9200
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 9300
    security_groups = [aws_security_group.kibana_sg.id] <span class="hljs-comment"># Kibana security group to access ElasticSearch</span>
  }

  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    from_port = 9200
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 9300
    security_groups = [var.lambda_sg] <span class="hljs-comment"># If you're using lambda to access ES.</span>
  }

  egress {
    description = <span class="hljs-string">"egress rules"</span>
    from_port   = 0
    protocol    = <span class="hljs-string">"-1"</span>
    to_port     = 0
    cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
  }
  tags={
    Name=<span class="hljs-string">"elasticsearch_sg"</span>
  }
}
</code></pre>
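<p>The <code>var.*</code> references in this security group assume variables declared elsewhere in the module. A minimal sketch of what those declarations might look like (the names mirror the snippets in this article; the default is illustrative):</p>
<pre><code class="lang-bash">variable "vpc_id" { type = string }              # VPC the security groups belong to
variable "lambda_sg" { type = string }           # Security group ID of the Lambda accessing ES
variable "public_subnet_ids" { type = list(string) }
variable "elastic_aws_ami" { type = string }
variable "elastic_aws_instance_type" {
  type    = string
  default = "t2.large"
}
</code></pre>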
<p>ElasticSearch master nodes:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Elastic-Search master nodes</span>
resource <span class="hljs-string">"aws_key_pair"</span> <span class="hljs-string">"elastic_ssh_key"</span> {
  key_name=<span class="hljs-string">"elasticsearch_ssh"</span>
  public_key= file(<span class="hljs-string">"~/.ssh/elasticsearch_keypair.pub"</span>)
}
resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"elastic_nodes"</span> {
  count                  = 2
  ami                    = var.elastic_aws_ami
  instance_type          = var.elastic_aws_instance_type
  <span class="hljs-comment"># subnet_id              = aws_subnet.elastic_subnet[var.azs[count.index]].id</span>
  subnet_id              = var.public_subnet_ids[count.index]
  vpc_security_group_ids = [aws_security_group.elasticsearch_sg.id]
  key_name               = aws_key_pair.elastic_ssh_key.key_name
  iam_instance_profile   = <span class="hljs-string">"<span class="hljs-variable">${aws_iam_instance_profile.elastic_ec2_instance_profile.name}</span>"</span>
  associate_public_ip_address = <span class="hljs-literal">true</span>
  tags = {
    Name = <span class="hljs-string">"elasticsearch dev node-<span class="hljs-variable">${count.index}</span>"</span>
  }
}
data <span class="hljs-string">"template_file"</span> <span class="hljs-string">"init_elasticsearch"</span> {
  depends_on = [ 
    aws_instance.elastic_nodes
  ]
  count          = 2
  template = file(<span class="hljs-string">"./elasticsearch/configs/elasticsearch_config.tpl"</span>)
  vars = {
    cluster_name = <span class="hljs-string">"elasticsearch_cluster"</span>
    node_name    = <span class="hljs-string">"node_<span class="hljs-variable">${count.index}</span>"</span>
    node         = aws_instance.elastic_nodes[count.index].private_ip
    node1        = aws_instance.elastic_nodes[0].private_ip
    node2        = aws_instance.elastic_nodes[1].private_ip
    node3        = aws_instance.elastic_datanodes[0].private_ip
  }
}

data <span class="hljs-string">"template_file"</span> <span class="hljs-string">"init_backupscript"</span> {
  depends_on = [ 
    aws_instance.elastic_nodes
  ]
  count          = 2
  template = file(<span class="hljs-string">"./elasticsearch/configs/s3_backup_script.tpl"</span>)
  vars = {
    cluster_name = <span class="hljs-string">"elasticsearch_cluster"</span>
    node         = aws_instance.elastic_nodes[count.index].private_ip
    node1        = aws_instance.elastic_nodes[0].private_ip
    node2        = aws_instance.elastic_nodes[1].private_ip
    node3        = aws_instance.elastic_datanodes[0].private_ip
  }
}

resource <span class="hljs-string">"aws_eip"</span> <span class="hljs-string">"elasticsearch_eip"</span>{
    count     = 2
    instance  = element(aws_instance.elastic_nodes.*.id, count.index)
    vpc       = <span class="hljs-literal">true</span>

    tags = {
    Name = <span class="hljs-string">"elasticsearch-eip-<span class="hljs-variable">${terraform.workspace}</span>-<span class="hljs-variable">${count.index + 1}</span>"</span>
  }
}

resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"move_es_file"</span> {
  count          = 2
  connection {
     <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
     user        = <span class="hljs-string">"ec2-user"</span>
     private_key = file(<span class="hljs-string">"~/.ssh/elasticsearch_keypair.pem"</span>)
     host        = aws_instance.elastic_nodes[count.index].public_ip
  } 
  provisioner <span class="hljs-string">"file"</span> {
    content      = data.template_file.init_elasticsearch[count.index].rendered
    destination  = <span class="hljs-string">"elasticsearch.yml"</span>
  }

  provisioner <span class="hljs-string">"file"</span> {
    content      = data.template_file.init_backupscript[count.index].rendered
    destination  = <span class="hljs-string">"s3_backup_script.sh"</span>

  }

}
resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"start_es"</span> {
  depends_on     = [ 
    null_resource.move_es_file
  ]
  count          = 2
  connection {
     <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
     user        = <span class="hljs-string">"ec2-user"</span>
     private_key = file(<span class="hljs-string">"~/.ssh/elasticsearch_keypair.pem"</span>)
     host        = aws_instance.elastic_nodes[count.index].public_ip
  }
  provisioner <span class="hljs-string">"remote-exec"</span> {
    inline = [
      <span class="hljs-string">"#!/bin/bash"</span>,
      <span class="hljs-string">"sudo yum update -y"</span>,
      <span class="hljs-string">"sudo yum install java-1.8.0 -y"</span>,
      <span class="hljs-string">"sudo rpm -i &lt;https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.1-x86_64.rpm&gt;"</span>,
      <span class="hljs-string">"sudo systemctl daemon-reload"</span>,
      <span class="hljs-string">"sudo systemctl enable elasticsearch.service"</span>,
      <span class="hljs-string">"sudo chmod -R 777 /etc/elasticsearch"</span>,
      <span class="hljs-string">"sudo sed -i 's@-Xms1g@-Xms<span class="hljs-variable">${aws_instance.elastic_nodes[count.index].root_block_device[0].volume_size/2}</span>g@g' /etc/elasticsearch/jvm.options"</span>,
      <span class="hljs-string">"sudo sed -i 's@-Xmx1g@-Xmx<span class="hljs-variable">${aws_instance.elastic_nodes[count.index].root_block_device[0].volume_size/2}</span>g@g' /etc/elasticsearch/jvm.options"</span>,
      <span class="hljs-comment"># "sudo sed -i 's/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /etc/elasticsearch/elasticsearch.yml",</span>
      <span class="hljs-string">"sudo rm /etc/elasticsearch/elasticsearch.yml"</span>,
      <span class="hljs-string">"sudo cp elasticsearch.yml /etc/elasticsearch/"</span>,
      <span class="hljs-string">"sudo systemctl start elasticsearch.service"</span>,
    ]
  }
}
</code></pre>
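<p>For reference, the <code>elasticsearch_config.tpl</code> rendered above is not shown in this article. A sketch of what it might contain, based only on the variables the <code>template_file</code> data source passes in (<code>cluster_name</code>, <code>node_name</code>, <code>node</code>, <code>node1</code> through <code>node3</code>); your actual template may differ:</p>
<pre><code class="lang-bash">cluster.name: ${cluster_name}
node.name: ${node_name}
network.host: ${node}
discovery.seed_hosts: ["${node1}", "${node2}", "${node3}"]
cluster.initial_master_nodes: ["node_0", "node_1"]
</code></pre>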
<pre><code class="lang-bash"><span class="hljs-comment"># Elastic-Search data nodes setup</span>
resource <span class="hljs-string">"aws_key_pair"</span> <span class="hljs-string">"elastic_datanode_ssh_key"</span> {
  key_name=<span class="hljs-string">"elasticsearch_datanode_ssh"</span>
  public_key= file(<span class="hljs-string">"~/.ssh/elasticsearch_keypair.pub"</span>)
}
resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"elastic_datanodes"</span> {
  count                  = 1
  ami                    = var.elastic_aws_ami
  instance_type          = var.elastic_aws_instance_type
  <span class="hljs-comment"># subnet_id              = aws_subnet.elastic_subnet[var.azs[count.index]].id</span>
  subnet_id              = var.public_subnet_ids[count.index]
  vpc_security_group_ids = [aws_security_group.elasticsearch_sg.id]
  key_name               = aws_key_pair.elastic_ssh_key.key_name
  iam_instance_profile   = <span class="hljs-string">"<span class="hljs-variable">${aws_iam_instance_profile.elastic_ec2_instance_profile.name}</span>"</span>
  associate_public_ip_address = <span class="hljs-literal">true</span>
  tags = {
    Name = <span class="hljs-string">"elasticsearch dev node-<span class="hljs-variable">${count.index + 2}</span>"</span>
  }
}
data <span class="hljs-string">"template_file"</span> <span class="hljs-string">"init_es_datanode"</span> {
  depends_on = [ 
    aws_instance.elastic_datanodes
  ]
  count          = 1
  template = file(<span class="hljs-string">"./elasticsearch/configs/elasticsearch_datanode_config.tpl"</span>)
  vars = {
    cluster_name = <span class="hljs-string">"elasticsearch_cluster"</span>
    node_name    = <span class="hljs-string">"datanode_<span class="hljs-variable">${count.index}</span>"</span>
    node         = aws_instance.elastic_datanodes[count.index].private_ip
    node1        = aws_instance.elastic_nodes[0].private_ip
    node2        = aws_instance.elastic_nodes[1].private_ip
    node3        = aws_instance.elastic_datanodes[0].private_ip
  }
}

data <span class="hljs-string">"template_file"</span> <span class="hljs-string">"init_backupscript_datanode"</span> {
  depends_on = [ 
    aws_instance.elastic_datanodes
  ]
  count          = 1
  template = file(<span class="hljs-string">"./elasticsearch/configs/s3_backup_script.tpl"</span>)
  vars = {
    cluster_name = <span class="hljs-string">"elasticsearch_cluster"</span>
    node         = aws_instance.elastic_datanodes[count.index].private_ip
    node1        = aws_instance.elastic_nodes[0].private_ip
    node2        = aws_instance.elastic_nodes[1].private_ip
    node3        = aws_instance.elastic_datanodes[0].private_ip
  }
}

<span class="hljs-comment"># Uncomment following if you want to attach elastic IP to your data nodes.</span>
<span class="hljs-comment"># resource "aws_eip" "elasticsearch_datanode_eip"{</span>
<span class="hljs-comment">#     count     = 1</span>
<span class="hljs-comment">#     instance  = element(aws_instance.elastic_datanodes.*.id, count.index)</span>
<span class="hljs-comment">#     vpc       = true</span>

<span class="hljs-comment">#     tags = {</span>
<span class="hljs-comment">#     Name = "elasticsearch-eip-datanode-${count.index + 1}"</span>
<span class="hljs-comment">#   }</span>
<span class="hljs-comment"># }</span>

resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"move_es_datanode_file"</span> {
  count          = 1
  connection {
     <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
     user        = <span class="hljs-string">"ec2-user"</span>
     private_key = file(<span class="hljs-string">"~/.ssh/elasticsearch_keypair.pem"</span>)
     host        = aws_instance.elastic_datanodes[count.index].public_ip
  } 
  provisioner <span class="hljs-string">"file"</span> {
    content      = data.template_file.init_es_datanode[count.index].rendered
    destination  = <span class="hljs-string">"elasticsearch_datanode.yml"</span>
  }

  provisioner <span class="hljs-string">"file"</span> {
    content      = data.template_file.init_backupscript_datanode[count.index].rendered
    destination  = <span class="hljs-string">"s3_backup_script.sh"</span>

  }

}
resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"start_es_datanodes"</span> {
  depends_on     = [ 
    null_resource.move_es_datanode_file
  ]
  count          = 1
  connection {
     <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
     user        = <span class="hljs-string">"ec2-user"</span>
     private_key = file(<span class="hljs-string">"~/.ssh/elasticsearch_keypair.pem"</span>)
     host        = aws_instance.elastic_datanodes[count.index].public_ip
  }
  provisioner <span class="hljs-string">"remote-exec"</span> {
    inline = [
      <span class="hljs-string">"#!/bin/bash"</span>,
      <span class="hljs-string">"sudo yum update -y"</span>,
      <span class="hljs-string">"sudo yum install java-1.8.0 -y"</span>,
      <span class="hljs-string">"sudo rpm -i &lt;https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.1-x86_64.rpm&gt;"</span>,
      <span class="hljs-string">"sudo systemctl daemon-reload"</span>,
      <span class="hljs-string">"sudo systemctl enable elasticsearch.service"</span>,
      <span class="hljs-string">"sudo chmod -R 777 /etc/elasticsearch"</span>,
      <span class="hljs-string">"sudo sed -i 's@-Xms1g@-Xms<span class="hljs-variable">${aws_instance.elastic_datanodes[count.index].root_block_device[0].volume_size/2}</span>g@g' /etc/elasticsearch/jvm.options"</span>,
      <span class="hljs-string">"sudo sed -i 's@-Xmx1g@-Xmx<span class="hljs-variable">${aws_instance.elastic_datanodes[count.index].root_block_device[0].volume_size/2}</span>g@g' /etc/elasticsearch/jvm.options"</span>,
      <span class="hljs-comment"># "sudo sed -i 's/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /etc/elasticsearch/elasticsearch.yml",</span>
      <span class="hljs-string">"sudo rm /etc/elasticsearch/elasticsearch.yml"</span>,
      <span class="hljs-string">"sudo cp elasticsearch_datanode.yml /etc/elasticsearch/elasticsearch.yml"</span>,
      <span class="hljs-string">"sudo systemctl start elasticsearch.service"</span>
    ]
  }
}
</code></pre>
<ol start="3">
<li><h3 id="heading-set-up-kibana"><strong>Set-up Kibana</strong></h3>
</li>
</ol>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"kibana_sg"</span> {
  vpc_id = aws_vpc.elastic_stack_vpc.id
  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 22
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 22
  }
  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 5601
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 5601
  }
  egress {
    description = <span class="hljs-string">"egress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 0
    protocol = <span class="hljs-string">"-1"</span>
    to_port = 0
  }
  tags={
    Name=<span class="hljs-string">"kibana_security_group"</span>
  }
}
</code></pre>
<p>Kibana:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Kibana setup</span>
resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"kibana"</span> {
  depends_on = [ 
    null_resource.start_es
   ]
  ami                    = <span class="hljs-string">"ami-0ed9277fb7eb570c9"</span>
  instance_type          = <span class="hljs-string">"t2.small"</span>
  subnet_id              = aws_subnet.elastic_stack_subnet[var.az_name[0]].id
  vpc_security_group_ids = [aws_security_group.kibana_sg.id]
  key_name               = aws_key_pair.elastic_ssh_key.key_name
  associate_public_ip_address = <span class="hljs-literal">true</span>
  tags = {
    Name = <span class="hljs-string">"kibana"</span>
  }
}
data <span class="hljs-string">"template_file"</span> <span class="hljs-string">"init_kibana"</span> {
  depends_on = [ 
    aws_instance.kibana
  ]
  template = file(<span class="hljs-string">"./configs/kibana_config.tpl"</span>)
  vars = {
    elasticsearch = aws_instance.elastic_nodes[0].public_ip
  }
}
resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"move_kibana_file"</span> {
  depends_on = [ 
    aws_instance.kibana
   ]
  connection {
     <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
     user        = <span class="hljs-string">"ec2-user"</span>
     private_key = file(<span class="hljs-string">"elasticsearch_keypair.pem"</span>)
     host        = aws_instance.kibana.public_ip
  } 
  provisioner <span class="hljs-string">"file"</span> {
    content     = data.template_file.init_kibana.rendered
    destination = <span class="hljs-string">"kibana.yml"</span>
  }
}

resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"install_kibana"</span> {
  depends_on = [ 
      aws_instance.kibana
   ]
  connection {
    <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
    user        = <span class="hljs-string">"ec2-user"</span>
    private_key = file(<span class="hljs-string">"elasticsearch_keypair.pem"</span>)
    host        = aws_instance.kibana.public_ip
  } 
  provisioner <span class="hljs-string">"remote-exec"</span> {
    inline = [
      <span class="hljs-string">"sudo yum update -y"</span>,
      <span class="hljs-string">"sudo rpm -i &lt;https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.1-x86_64.rpm&gt;"</span>,
      <span class="hljs-string">"sudo rm /etc/kibana/kibana.yml"</span>,
      <span class="hljs-string">"sudo cp kibana.yml /etc/kibana/"</span>,
      <span class="hljs-string">"sudo systemctl start kibana"</span>
    ]
  }
}
</code></pre>
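<p>Similarly, the <code>kibana_config.tpl</code> template only receives the <code>elasticsearch</code> variable, so a minimal sketch of it might look like this (an assumption, not the author’s exact file):</p>
<pre><code class="lang-bash">server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://${elasticsearch}:9200"]
</code></pre>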
<ol start="4">
<li><h3 id="heading-set-up-logstash"><strong>Set-up LogStash</strong></h3>
</li>
</ol>
<p>LogStash receives the logs sent by Filebeat and applies filters to them before sending them on to ElasticSearch. For this, add an inbound rule for <strong>port 5044</strong>.</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"logstash_sg"</span> {
  vpc_id = aws_vpc.elastic_stack_vpc.id
  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 22
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 22
  }
  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 5044
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 5044
  }
  egress {
    description = <span class="hljs-string">"egress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 0
    protocol = <span class="hljs-string">"-1"</span>
    to_port = 0
  }
  tags={
    Name=<span class="hljs-string">"logstash_sg"</span>
  }
}
</code></pre>
<p>LogStash:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"logstash"</span> {
  depends_on = [ 
    null_resource.install_kibana
   ]
  ami                    = <span class="hljs-string">"ami-04d29b6f966df1537"</span>
  instance_type          = <span class="hljs-string">"t2.large"</span>
  subnet_id              = aws_subnet.elastic_stack_subnet[var.az_name[0]].id
  vpc_security_group_ids = [aws_security_group.logstash_sg.id]
  key_name               = aws_key_pair.elastic_ssh_key.key_name
  associate_public_ip_address = <span class="hljs-literal">true</span>
  tags = {
    Name = <span class="hljs-string">"logstash"</span>
  }
}

data <span class="hljs-string">"template_file"</span> <span class="hljs-string">"init_logstash"</span> {
  depends_on = [ 
    aws_instance.logstash
  ]
  template = file(<span class="hljs-string">"./logstash_config.tpl"</span>)
  vars = {
    elasticsearch = aws_instance.elastic_nodes[0].public_ip
  }
}

resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"move_logstash_file"</span> {
  depends_on = [ 
    aws_instance.logstash
   ]
  connection {
     <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
     user        = <span class="hljs-string">"ec2-user"</span>
     private_key = file(<span class="hljs-string">"tf-kp.pem"</span>)
     host        = aws_instance.logstash.public_ip
  } 
  provisioner <span class="hljs-string">"file"</span> {
    content     = data.template_file.init_logstash.rendered
    destination = <span class="hljs-string">"logstash.conf"</span>
  }
}

resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"install_logstash"</span> {
  depends_on = [ 
      aws_instance.logstash
   ]
  connection {
    <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
    user        = <span class="hljs-string">"ec2-user"</span>
    private_key = file(<span class="hljs-string">"tf-kp.pem"</span>)
    host        = aws_instance.logstash.public_ip
  } 
  provisioner <span class="hljs-string">"remote-exec"</span> {
    inline = [
      <span class="hljs-string">"sudo yum update -y &amp;&amp; sudo yum install java-1.8.0-openjdk -y"</span>,
      <span class="hljs-string">"sudo rpm -i &lt;https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.1-x86_64.rpm&gt;"</span>,
      <span class="hljs-string">"sudo cp logstash.conf /etc/logstash/conf.d/logstash.conf"</span>,
      <span class="hljs-string">"sudo systemctl start logstash.service"</span>
    ]
  }
}
</code></pre>
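<p>The <code>logstash_config.tpl</code> rendered above is not shown in this article; a minimal pipeline that listens for Beats on port 5044 and forwards to ElasticSearch might look like this (a sketch only, using the template’s <code>elasticsearch</code> variable):</p>
<pre><code class="lang-bash">input {
  beats {
    port =&gt; 5044
  }
}
output {
  elasticsearch {
    hosts =&gt; ["${elasticsearch}:9200"]
    index =&gt; "logstash-%{+YYYY.MM.dd}"
  }
}
</code></pre>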
<ol start="5">
<li><h3 id="heading-filebeat-setup"><strong>Filebeat Setup</strong></h3>
</li>
</ol>
<p>Filebeat takes logs from <code>/var/log/</code> and sends them to LogStash on <strong>port 5044</strong>.</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"filebeat_sg"</span> {
  vpc_id = aws_vpc.elastic_stack_vpc.id
  ingress {
    description = <span class="hljs-string">"ingress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 22
    protocol = <span class="hljs-string">"tcp"</span>
    to_port = 22
  }
  egress {
    description = <span class="hljs-string">"egress rules"</span>
    cidr_blocks = [ <span class="hljs-string">"0.0.0.0/0"</span> ]
    from_port = 0
    protocol = <span class="hljs-string">"-1"</span>
    to_port = 0
  }
  tags={
    Name=<span class="hljs-string">"filebeat_sg"</span>
  }
}
</code></pre>
<p>Filebeat:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"filebeat"</span> {
  depends_on = [ 
    null_resource.install_logstash
   ]
  ami                    = <span class="hljs-string">"ami-04d29b6f966df1537"</span>
  instance_type          = <span class="hljs-string">"t2.large"</span>
  subnet_id = aws_subnet.elastic_stack_subnet[var.az_name[0]].id
  vpc_security_group_ids = [aws_security_group.filebeat_sg.id]
  key_name               = aws_key_pair.elastic_ssh_key.key_name
  associate_public_ip_address = <span class="hljs-literal">true</span>
  tags = {
    Name = <span class="hljs-string">"filebeat"</span>
  }
}

resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"move_filebeat_file"</span> {
  depends_on = [ 
    aws_instance.filebeat
   ]
  connection {
     <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
     user        = <span class="hljs-string">"ec2-user"</span>
     private_key = file(<span class="hljs-string">"tf-kp.pem"</span>)
     host        = aws_instance.filebeat.public_ip
  } 
  provisioner <span class="hljs-string">"file"</span> {
    <span class="hljs-built_in">source</span>      = <span class="hljs-string">"filebeat.yml"</span>
    destination = <span class="hljs-string">"filebeat.yml"</span>
  }
}

resource <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"install_filebeat"</span> {
  depends_on = [ 
    null_resource.move_filebeat_file
   ]
  connection {
    <span class="hljs-built_in">type</span>        = <span class="hljs-string">"ssh"</span>
    user        = <span class="hljs-string">"ec2-user"</span>
    private_key = file(<span class="hljs-string">"tf-kp.pem"</span>)
    host        = aws_instance.filebeat.public_ip
  } 
  provisioner <span class="hljs-string">"remote-exec"</span> {
    inline = [
      <span class="hljs-string">"sudo yum update -y"</span>,
      <span class="hljs-string">"sudo rpm -i &lt;https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.1-x86_64.rpm&gt;"</span>,
      <span class="hljs-string">"sudo sed -i 's@kibana_ip@<span class="hljs-variable">${aws_instance.kibana.public_ip}</span>@g' filebeat.yml"</span>,
      <span class="hljs-string">"sudo sed -i 's@logstash_ip@<span class="hljs-variable">${aws_instance.logstash.public_ip}</span>@g' filebeat.yml"</span>,
      <span class="hljs-string">"sudo rm /etc/filebeat/filebeat.yml"</span>,
      <span class="hljs-string">"sudo cp filebeat.yml /etc/filebeat/"</span>,
      <span class="hljs-string">"sudo systemctl start filebeat.service"</span>
    ]
  }
}
</code></pre>
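<p>The <code>filebeat.yml</code> copied above contains the <code>kibana_ip</code> and <code>logstash_ip</code> placeholders that the <code>sed</code> commands rewrite. A minimal sketch of such a file (an assumption; adjust the paths to your needs):</p>
<pre><code class="lang-bash">filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log

output.logstash:
  hosts: ["logstash_ip:5044"]

setup.kibana:
  host: "kibana_ip:5601"
</code></pre>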
<p>Now you have successfully set up the Elastic Stack on AWS EC2 using Terraform.</p>
<p>Visit <strong><em>&lt;public_ip_of_any_es_node&gt;:9200/_cluster/health</em></strong> to see the ElasticSearch cluster status.</p>
<p>Visit <strong><em>&lt;public_ip_of_any_es_node&gt;:9200/_cat/nodes?v</em></strong> to see the ElasticSearch nodes.</p>
<p>Visit <strong><em>&lt;public_ip_of_kibana_instance&gt;:5601</em></strong> to open Kibana.</p>
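<p>You can also check the same endpoints from a terminal:</p>
<pre><code class="lang-bash">curl -s http://&lt;public_ip_of_any_es_node&gt;:9200/_cluster/health?pretty
curl -s http://&lt;public_ip_of_any_es_node&gt;:9200/_cat/nodes?v
</code></pre>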
<p>After accessing Kibana, go to <strong>Settings &gt; Index Patterns</strong> and add the logstash index.</p>
<p>To check logs, SSH into each component and run the command:</p>
<pre><code class="lang-bash">$ sudo systemctl status &lt;component-name&gt; -l
</code></pre>
<p>Then SSH into the Filebeat EC2 instance and add a sample <code>.log</code> file inside <code>/var/log/</code>. You can then search for those logs in the Kibana dashboard.</p>
<p>To write a sample log, run the following and watch the record appear in Kibana:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"echo 'This is a sample log for test' &gt;&gt; /var/log/test-log.log"</span> | sudo bash
</code></pre>
<p>That’s It.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Congratulations, you have successfully learned how to set up the Elastic Stack on AWS EC2 using Terraform.</p>
<p>Thank You!</p>
]]></content:encoded></item><item><title><![CDATA[Docker For Beginners: Learning Notes]]></title><description><![CDATA[What is Docker?
Docker is a Containerization platform that simplifies packaging, deploying, and running applications. It bundles applications and their dependencies into CONTAINERS, ensuring consistent behavior across different environments. Docker e...]]></description><link>https://blog.budhathokisagar.com.np/docker-for-beginners-cheatsheet</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/docker-for-beginners-cheatsheet</guid><category><![CDATA[Docker]]></category><category><![CDATA[Devops]]></category><category><![CDATA[docker images]]></category><category><![CDATA[docker-architecture]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[containers]]></category><category><![CDATA[containerization]]></category><category><![CDATA[Docker compose]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[architecture]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Fri, 28 Jun 2024 06:31:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719983871660/c8f9b69c-31cf-4e5d-af0a-3e415b839b64.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-docker">What is Docker?</h2>
<p>Docker is a <strong>Containerization</strong> platform that simplifies packaging, deploying, and running applications. It bundles applications and their dependencies into <strong>CONTAINERS</strong>, ensuring consistent behavior across different environments. Docker enhances efficiency and reliability, supports microservices and scalable applications, and provides tools for managing containers and secure environments. Dockerfiles define application environments, and container images can be shared across teams. Docker revolutionizes modern software development by improving development, testing, and deployment processes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719554089837/fa7ec089-7e58-41e9-b7e0-6c488a2b062c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-is-containerization">What is containerization? 📦</h2>
<p><strong>Containerization</strong> is a lightweight form of virtualization that encapsulates an application and its dependencies into a unit called a "container." This container includes everything needed to run the application, ensuring it works consistently across different environments. Unlike traditional virtual machines, containers share the host system's OS kernel, making them more efficient and faster to start. This approach simplifies deployment, enhances scalability and portability, and enables rapid, reliable development.</p>
<hr />
<p>By isolating applications, containerization minimizes software conflicts and streamlines management.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719553968144/20b7900f-8f09-4aea-903b-f57accd76864.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-difference-between-vms-amp-containers">Difference between VMs &amp; Containers</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719554423619/aafbe17e-4608-457b-95bd-5aa7e4b0b6cb.png" alt class="image--center mx-auto" /></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Aspect</td><td>Containers</td><td>Virtual Machines</td></tr>
</thead>
<tbody>
<tr>
<td>OS</td><td>Share host’s Kernel</td><td>Has its own Kernel</td></tr>
<tr>
<td>Resource Usage</td><td>Lightweight, efficient</td><td>Heavier, more resource usage</td></tr>
<tr>
<td>Startup Time</td><td>Quick start</td><td>Slower start</td></tr>
<tr>
<td>Isolation</td><td>Process-level separation</td><td>Full OS isolation</td></tr>
<tr>
<td>Portability</td><td>Highly portable</td><td>Compatibility concerns</td></tr>
<tr>
<td>Resource Overhead</td><td>Minimal overhead</td><td>Higher overhead</td></tr>
<tr>
<td>Isolation Level</td><td>Lighter isolation</td><td>Stronger isolation</td></tr>
</tbody>
</table>
</div><p>Learning Resource:</p>
<p><a target="_blank" href="https://aws.amazon.com/compare/the-difference-between-containers-and-virtual-machines/">Containers vs VM - Difference Between Deployment Technologies - AWS</a></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">It’s important to note that Docker containers don’t run in their own virtual machines, but share a Linux kernel. Compared to virtual machines, containers use less memory and less CPU.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">However, a Linux runtime is required for Docker. Implementations on non-Linux platforms such as macOS and Windows 10 use a single Linux virtual machine. The containers share this system.</div>
</div>

<h2 id="heading-advantages-of-containerization"><strong>Advantages of Containerization</strong></h2>
<ul>
<li><p>Increased <strong>Portability</strong></p>
</li>
<li><p>Easier <strong>Scalability</strong></p>
</li>
<li><p>Easy and Fast <strong>Deployments</strong></p>
</li>
<li><p>Better <strong>Productivity</strong></p>
</li>
<li><p>Improved <strong>Security</strong></p>
</li>
<li><p>Consistent test environment for development and QA.</p>
</li>
<li><p>Cross-platform packages called images.</p>
</li>
<li><p>Isolation and encapsulation of application dependencies.</p>
</li>
<li><p>Ability to scale efficiently, easily, and in real time.</p>
</li>
<li><p>Enhances efficiency via easy reuse of images.</p>
</li>
</ul>
<h2 id="heading-disadvantage"><strong>Disadvantage</strong></h2>
<ul>
<li>Compatibility issue: <em>Windows container won’t run on Linux machines and vice-versa</em></li>
</ul>
<h3 id="heading-other-disadvantagesim-marcopolo-of-these-discoveries">Other disadvantages(<em>I’m Marcopolo of these discoveries</em>) 😎</h3>
<ul>
<li><p>Counter-productivity or efficiency-draining issue: <em>it’s easy to turn a 5-minute task into a 5-hour task</em></p>
</li>
<li><p>Troubleshooting issue: <em>debugging gets hard once you run into tons of dependency issues</em></p>
</li>
</ul>
<h2 id="heading-installation">Installation:</h2>
<p><a target="_blank" href="https://docs.docker.com/get-docker/">Get Docker</a></p>
<h2 id="heading-docker-architecture">Docker Architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719554551088/27fee8e4-63e4-43f1-a12e-54e25c9f16af.png" alt class="image--center mx-auto" /></p>
<p>Docker uses a client-server architecture to manage and run containers:</p>
<ol>
<li><p><strong>Docker Client:</strong></p>
<ul>
<li><p>The Docker client is the command-line interface (CLI) or graphical user interface (GUI) that users interact with to build, manage, and control Docker containers.</p>
</li>
<li><p>It sends commands to the Docker daemon to perform various tasks.</p>
</li>
</ul>
</li>
<li><p><strong>Docker Daemon:</strong></p>
<ul>
<li><p>The Docker daemon is a background process that manages Docker containers on a host system.</p>
</li>
<li><p>It listens for Docker API requests and takes care of building, running, and managing containers.</p>
</li>
</ul>
</li>
<li><p><strong>Docker Registry:</strong></p>
<ul>
<li><p>Docker images can be stored and shared through Docker registries.</p>
</li>
<li><p>A Docker registry is a repository for Docker images, and it can be public (like Docker Hub) or private.</p>
</li>
</ul>
</li>
<li><p><strong>Docker Hub:</strong></p>
<ul>
<li><p>Docker Hub is a cloud-based registry service provided by Docker, where users can find, share, and store Docker images.</p>
</li>
<li><p>It serves as a central repository for Docker images.</p>
</li>
</ul>
</li>
</ol>
<p>Here's a high-level overview of how Docker components interact:</p>
<ul>
<li><p>The Docker client sends commands to the Docker daemon and receives information about containers and images.</p>
</li>
<li><p>Docker images are fetched or built from the Docker registry.</p>
</li>
<li><p>The Docker daemon handles the creation, starting, stopping, and management of containers.</p>
</li>
</ul>
<h2 id="heading-docker-workflow">Docker Workflow</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719554591303/e2da1e24-0567-4398-8c05-29bd44debc46.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-more-on-docker-ufffsss">More on Docker? - Ufffsss!!!!!</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719555011620/aafbb350-fa1a-41ef-95b5-beca842a3103.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-dockerfile-instructions">Dockerfile Instructions</h2>
<p>Please <a target="_blank" href="https://docs.docker.com/reference/dockerfile/">click here</a>.</p>
<h2 id="heading-docker-image">Docker Image</h2>
<p>A Docker image is a read-only template with instructions to create a container on the Docker platform. It is the starting point for anyone new to Docker.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719555216118/5f3f6bc1-9559-4062-af54-5a5b73f83213.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-time-for-a-hands-on-yeahhhhh">Time for a Hands-On? - YEAHHHHH!!!!!</h2>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719555272783/1dd90110-89ba-4ec4-90e0-d81895d96f67.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-challenges">Challenges 😎</h2>
<h3 id="heading-challenge-1"><strong>Challenge 1</strong></h3>
<p>Run a container with the <code>nginx:1.14-alpine</code> image and name it <code>webapp</code></p>
<pre><code class="lang-bash">docker run -p 5000:80 --name webapp -d nginx:1.14-alpine
</code></pre>
<h3 id="heading-challenge-2"><strong>Challenge 2</strong></h3>
<p>Containerize Python application and push the image to DockerHub</p>
<p><strong>Step 1</strong> - Create Python/NodeJS app. (Clone from GitHub) =&gt;<a target="_blank" href="https://github.com/sbmagar/luckydrawapp-python">Python</a> OR <a target="_blank" href="https://github.com/sbmagar/luckydrawapp-nodejs">NodeJS</a></p>
<p><strong>Step 2</strong> - Write Dockerfile for the app</p>
<p><strong>Step 3</strong> - Create image for the app</p>
<p><strong>Step 4</strong> - Run the container for the app</p>
<p><strong>Step 5</strong> - If it works, push the image to DockerHub</p>
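<p>A sketch of steps 3-5 (the image name and the DockerHub username are placeholders):</p>
<pre><code class="lang-bash"># Build the image from the Dockerfile in the app directory
docker build -t &lt;your-dockerhub-username&gt;/luckydrawapp:latest .

# Run it locally to verify it works
docker run -d -p 5000:5000 &lt;your-dockerhub-username&gt;/luckydrawapp:latest

# Log in and push the image to DockerHub
docker login
docker push &lt;your-dockerhub-username&gt;/luckydrawapp:latest
</code></pre>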
<p><strong>ENV variables</strong></p>
<ul>
<li><p><strong>Purpose</strong>: Environment variables in Docker are used to configure applications, control runtime behavior, and manage sensitive information.</p>
</li>
<li><p><strong>Configuration</strong>: They replace hardcoded values in configuration files, enabling flexibility across different environments.</p>
</li>
<li><p><strong>Dynamic Behavior</strong>: Environment variables can control feature toggles, logging levels, and runtime environments.</p>
</li>
<li><p><strong>Secrets Management</strong>: Sensitive data like passwords or API keys can be securely injected into containers using environment variables.</p>
</li>
<li><p><strong>Setting Variables</strong> (see the example after this list):</p>
<ul>
<li><p>Use <code>ENV</code> instruction in Dockerfile to set variables during image build.</p>
</li>
<li><p>Pass variables with <code>-e</code> or <code>--env</code> flag in <code>docker run</code> command.</p>
</li>
<li><p>Define them in <code>docker-compose.yml</code> under the <code>environment</code> key.</p>
</li>
<li><p>In Docker Swarm, set them with <code>docker service create/update</code> or in a Docker Compose file for Swarm.</p>
</li>
</ul>
</li>
<li><p><strong>Flexibility and Portability</strong>: Environment variables make Dockerized applications easier to manage and deploy across diverse environments.</p>
</li>
</ul>
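<p>A minimal example of the run-time approach (the image name and variable are illustrative):</p>
<pre><code class="lang-bash"># Pass an environment variable when starting the container
docker run -d --name demo -e APP_COLOR=green my_app_image

# Verify the variable inside the running container
docker exec demo env | grep APP_COLOR
</code></pre>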
<h3 id="heading-challenge-3">Challenge 3</h3>
<p>Run a container named <code>shrawan-app</code> using image <code>sbmagar/blogging-app</code> and set the environment variable <code>APP_COLOR</code> to <code>green</code>. Make the application available on port <code>7566</code> on the host. The application listens on port <code>5000</code>.</p>
<ul>
<li><p>Solution</p>
<pre><code class="lang-jsx">  docker run -d \\
  --name shrawan-app \\
  -p <span class="hljs-number">75666</span>:<span class="hljs-number">5000</span> \\
  -e APP_COLOR=green \\
  sbmagar/blogging-app
</code></pre>
</li>
</ul>
<h2 id="heading-commands-amp-arguments">Commands &amp; Arguments</h2>
<p>Here, I'll just talk about the two main instructions: <code>CMD</code> and <code>ENTRYPOINT</code> 😎:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Always remember that a container does not host an operating system; instead, it runs a process and will be terminated once the process is completed.</div>
</div>

<h3 id="heading-cmd">CMD</h3>
<pre><code class="lang-docker"><span class="hljs-comment"># Use a base image</span>
<span class="hljs-keyword">FROM</span> alpine:latest

<span class="hljs-comment"># Run a sleep command when the container starts</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"sleep"</span>, <span class="hljs-string">"3600"</span>]</span>
</code></pre>
<p><code>CMD ["sleep", "3600"]</code> ✅</p>
<p><code>CMD ["sleep 3600"]</code> ❌</p>
<p>It's recommended to use the first form (<code>CMD ["sleep", "3600"]</code>), specifying the command and its arguments as separate elements in a JSON array, for clarity and to ensure proper execution.</p>
<h3 id="heading-entrypoint">ENTRYPOINT</h3>
<p><code>ENTRYPOINT</code> is a Dockerfile instruction that sets the main command to run when a container starts. It ensures the specified command is executed, unlike <code>CMD</code> which provides default arguments to the command.</p>
<pre><code class="lang-docker"><span class="hljs-keyword">FROM</span> alpine:latest

<span class="hljs-comment"># Set the sleep command as the entry point</span>
<span class="hljs-keyword">ENTRYPOINT</span><span class="bash"> [<span class="hljs-string">"sleep"</span>]</span>

<span class="hljs-comment"># Set a default sleep time of 3600 seconds (1 hour)</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"3600"</span>]</span>
</code></pre>
<p>Explanation:</p>
<ul>
<li><p>This Dockerfile starts with a base image of Alpine Linux.</p>
</li>
<li><p>The <code>ENTRYPOINT</code> instruction specifies that the <code>sleep</code> command will be the main command to run when the container starts.</p>
</li>
<li><p>The <code>CMD</code> instruction sets a default argument for the <code>sleep</code> command, specifying the sleep time in seconds. In this case, the default sleep time is 3600 seconds (1 hour).</p>
</li>
</ul>
<p>Overriding the default argument:</p>
<pre><code class="lang-bash">docker run my_image 1800   # Sleeps for 1800 seconds (30 minutes)
</code></pre>
<p><strong>Communication between containers</strong></p>
<p>For multiple containers that depend on one another, we can use the command-line option <code>--link</code>.</p>
<p>When using the <code>--link</code> option in Docker:</p>
<ul>
<li><p>A secure tunnel is created between containers for communication.</p>
</li>
<li><p>Environment variables are set in the destination container, providing details about the linked container.</p>
</li>
<li><p>Docker updates the <code>/etc/hosts</code> file in the destination container to resolve the hostname of the linked container.</p>
</li>
<li><p>Access to exposed ports in the linked container is provided.</p>
</li>
<li><p><strong>Example</strong></p>
<ol>
<li><p><strong>Run MySQL Container</strong>: Start the MySQL container with a name <code>mysql-container</code>, exposing port 3306:</p>
<pre><code class="lang-bash"> docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=password -d mysql:latest
</code></pre>
</li>
<li><p><strong>Create a .NET Core Application</strong>: Assume you have a .NET Core application that needs to connect to the MySQL database. Build the .NET Core application and create a Docker image for it. Here's a simple Dockerfile assuming the application is published to a folder named <code>app</code>:</p>
<pre><code class="lang-docker"> <span class="hljs-keyword">FROM</span> mcr.microsoft.com/dotnet/core/runtime:latest
 <span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
 <span class="hljs-keyword">COPY</span><span class="bash"> ./app .</span>
 <span class="hljs-keyword">ENTRYPOINT</span><span class="bash"> [<span class="hljs-string">"dotnet"</span>, <span class="hljs-string">"YourApp.dll"</span>]</span>
</code></pre>
</li>
<li><p><strong>Run .NET Core Application Container Linked to MySQL</strong>: Now, run the .NET Core application container, linking it to the MySQL container:</p>
<pre><code class="lang-bash"> docker run --name dotnet-app --link mysql-container:mysql -d your-dotnet-image:latest
</code></pre>
</li>
</ol>
</li>
</ul>
<p>    In this example:</p>
<ul>
<li><p><code>--name mysql-container</code> names the MySQL container <code>mysql-container</code>.</p>
</li>
<li><p><code>-e MYSQL_ROOT_PASSWORD=password</code> sets the MySQL root password.</p>
</li>
<li><p><code>--name dotnet-app</code> names the .NET Core application container <code>dotnet-app</code>.</p>
</li>
<li><p><code>--link mysql-container:mysql</code> links the .NET Core application container to the MySQL container with the alias <code>mysql</code>.</p>
</li>
<li><p><code>-d</code> runs both containers in detached mode.</p>
</li>
</ul>
<p>    Inside the .NET Core application container, you can access the MySQL database using the hostname <code>mysql</code> and the exposed port. Ensure your .NET Core application is set up to connect to MySQL using the correct hostname and port.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Docker version 1.9 considers the <code>--link</code> option a legacy feature and recommends using user-defined networks for better isolation, scalability, and ease of use in inter-container communication.</div>
</div>
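<p>For comparison, the modern user-defined-network equivalent of the <code>--link</code> example above looks like this (same hypothetical image names):</p>
<pre><code class="lang-bash"># Create a user-defined bridge network
docker network create app-net

# Containers on the same network reach each other by container name -- no --link needed
docker run --name mysql-container --network app-net -e MYSQL_ROOT_PASSWORD=password -d mysql:latest
docker run --name dotnet-app --network app-net -d your-dotnet-image:latest
</code></pre>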

<h2 id="heading-docker-compose"><strong>Docker Compose</strong></h2>
<p>To run multiple containers with one command, use a configuration file. Here are some Docker commands:</p>
<ul>
<li><p><code>docker run --name redis redis:alpine</code></p>
</li>
<li><p><code>docker run --name redis -d redis:alpine</code></p>
</li>
<li><p><code>docker rm redis</code></p>
</li>
<li><p><code>docker run --name redis -d redis:alpine</code></p>
</li>
<li><p><code>docker run --name luckydrawapp --link redis:redis -p 5000:5000 luckydraw-app:latest</code></p>
</li>
<li><p><code>docker rm luckydrawapp</code></p>
</li>
<li><p><code>docker run --name luckydrawapp --link redis:redis -d -p 8085:5000 luckydraw-app:latest</code></p>
</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3.0'</span>
<span class="hljs-attr">services:</span>
    <span class="hljs-attr">redis:</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">redis:alpine</span>
    <span class="hljs-attr">luckydrawapp:</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">luckydraw-app:latest</span>
        <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-number">5000</span><span class="hljs-string">:5000</span>
</code></pre>
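<p>With this <code>docker-compose.yml</code> in place, both containers start with a single command:</p>
<pre><code class="lang-bash"># Start both services in the background
docker-compose up -d

# Tear everything down again
docker-compose down
</code></pre>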
<hr />
<h2 id="heading-docker-volumes">Docker Volumes</h2>
<p>Docker volumes allow you to save data created and used by Docker containers. They enable data sharing between a host machine and Docker containers or between different containers.</p>
<h3 id="heading-types-of-volumes">Types of Volumes</h3>
<ol>
<li><p><strong>Named Volumes:</strong> Managed by Docker, easier to use and manage.</p>
</li>
<li><p><strong>Host Volumes:</strong> Maps a directory from the host machine into the container.</p>
</li>
<li><p><strong>Anonymous Volumes:</strong> Similar to named volumes but managed by Docker with a randomly generated name.</p>
</li>
</ol>
<h3 id="heading-commands">Commands:</h3>
<ol>
<li><p><strong>Create a named volume:</strong></p>
<pre><code class="lang-bash"> docker volume create my_volume
</code></pre>
</li>
<li><p><strong>Run a container with a named volume:</strong></p>
<pre><code class="lang-bash"> docker run -v my_volume:/path/<span class="hljs-keyword">in</span>/container image_name
</code></pre>
</li>
<li><p><strong>List all volumes:</strong></p>
<pre><code class="lang-bash"> docker volume ls
</code></pre>
</li>
<li><p><strong>Inspect a volume:</strong></p>
<pre><code class="lang-bash"> docker volume inspect my_volume
</code></pre>
</li>
<li><p><strong>Remove a volume:</strong></p>
<pre><code class="lang-bash"> docker volume rm my_volume
</code></pre>
</li>
<li><p><strong>Mount a host directory as a volume:</strong></p>
<pre><code class="lang-bash"> docker run -v /host/path:/container/path image_name
</code></pre>
</li>
</ol>
<h3 id="heading-dockerfile-example">Dockerfile Example</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Define a volume</span>
VOLUME /data

<span class="hljs-comment"># Set working directory</span>
WORKDIR /data

<span class="hljs-comment"># Copy files into the container</span>
COPY . /data
</code></pre>
<h3 id="heading-docker-compose-example">Docker Compose Example</h3>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3.8'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">app:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">my_app_image</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">my_volume:/app/data</span>

<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">my_volume:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span>
</code></pre>
<h2 id="heading-key-points">Key Points</h2>
<ul>
<li><p>Volumes are useful for persisting data even if containers are removed.</p>
</li>
<li><p>They can be shared between containers.</p>
</li>
<li><p>Docker volumes are stored in a part of the host filesystem managed by Docker.</p>
</li>
</ul>
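<p>A quick sketch of sharing one named volume between two containers (the container and file names are illustrative):</p>
<pre><code class="lang-bash">docker volume create shared_data

# Writer: drops a file into the volume
docker run --rm -v shared_data:/data alpine sh -c "echo hello &gt; /data/msg.txt"

# Reader: sees the same file
docker run --rm -v shared_data:/data alpine cat /data/msg.txt   # prints "hello"
</code></pre>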
<hr />
<p>These notes provide a solid overview of Docker volumes: named, host, and anonymous volumes, along with how to create, run, list, inspect, and remove them, plus examples using Dockerfile and Docker Compose.</p>
<p><a target="_blank" href="https://docs.docker.com/compose/">https://docs.docker.com/compose/</a></p>
<p><a target="_blank" href="https://docs.docker.com/engine/reference/commandline/compose/">https://docs.docker.com/engine/reference/commandline/compose/</a></p>
]]></content:encoded></item><item><title><![CDATA[Local Set Up Kubernetes with Minikube]]></title><description><![CDATA[In this article, you’ll learn about Kubernetes's local setup with Minikube so you can use a local Kubernetes instance for your development environment.
What you’ll get
You’ll learn to:

Install a local Kubernetes instance by using minikube on Linux, ...]]></description><link>https://blog.budhathokisagar.com.np/local-set-up-kubernetes-with-minikube</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/local-set-up-kubernetes-with-minikube</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[docker images]]></category><category><![CDATA[operating system]]></category><category><![CDATA[minikube]]></category><category><![CDATA[virtual machine]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Thu, 27 Jun 2024 11:12:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719486991635/f35efa5a-83b4-4db9-9b88-0398ccc612b2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, you’ll learn about Kubernetes's local setup with Minikube so you can use a local Kubernetes instance for your development environment.</p>
<h2 id="heading-what-youll-get">What you’ll get</h2>
<p>You’ll learn to:</p>
<ul>
<li><p>Install a local Kubernetes instance by using <code>minikube</code> on Linux, macOS, or Windows.</p>
</li>
<li><p>Expose ingress for services outside of the Kubernetes cluster.</p>
</li>
<li><p>Additionally, configure the hostname to your ingress IP.</p>
</li>
</ul>
<h2 id="heading-minikube">Minikube</h2>
<p><code>minikube</code> is an open-source utility that allows you to quickly deploy a local Kubernetes cluster on your personal computer. By using virtualization technologies, <code>minikube</code> creates a <strong>virtual machine (VM)</strong> that contains a single-node Kubernetes cluster. VMs are virtual computers and each VM is allocated its own system resources and operating system.</p>
<p>The latest <code>minikube</code> releases also allow you to create your cluster by using containers instead of virtual machines. Nevertheless, this approach is still not mature, and it is not covered in this article.</p>
<p>And, <code>minikube</code> is compatible with Linux, macOS, and Windows operating systems.</p>
<blockquote>
<p><strong>Note:</strong> Installing a local Kubernetes cluster requires administrative privileges on your system. If you do not have administrative privileges, you can’t run a Kubernetes cluster locally. In such a case, you can use a remote Kubernetes cluster, e.g., Red Hat’s OpenShift.</p>
</blockquote>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>At least 2 GB of free memory</p>
</li>
<li><p>2 CPUs or more</p>
</li>
<li><p>At least 20 GB of free disk space</p>
</li>
<li><p>A locally installed hypervisor (using a container runtime is not covered in this article)</p>
</li>
</ul>
<p>Let’s install minikube first.</p>
<blockquote>
<p><strong>Note:</strong> You must install or enable <strong>hypervisor technology</strong> on your system before installing <code>minikube</code>. <strong>Hypervisor</strong> is software that is used to create and manage virtual machines on a shared physical hardware system.</p>
</blockquote>
<h3 id="heading-step-1-install-minikube">Step-1: Install minikube</h3>
<p>This tutorial covers Linux systems only; to set up on macOS or Windows, follow the official documentation: <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">minikube installation</a>.</p>
<p>The preferred hypervisor for Linux systems is <a target="_blank" href="https://minikube.sigs.k8s.io/docs/drivers/kvm2/"><code>kvm2</code></a>. <code>minikube</code> communicates with the hypervisor using the <code>libvirt</code> virtualization API libraries.</p>
<blockquote>
<p><strong>Note:</strong> Prefix the following commands with <code>sudo</code> if you are running a user without administrative privileges.</p>
</blockquote>
<p>Install the virtualization libraries (for distributions other than Ubuntu, use the respective package manager and packages):</p>
<pre><code class="lang-bash"><span class="hljs-comment"># For Ubuntu/Debian</span>
$ sudo apt update
$ sudo apt -y install qemu-kvm libvirt-daemon bridge-utils virtinst libvirt-daemon-system
</code></pre>
<p>Start the <code>libvirtd</code> service:</p>
<pre><code class="lang-bash">$ systemctl start libvirtd
$ systemctl <span class="hljs-built_in">enable</span> libvirtd
</code></pre>
<p><strong>OR,</strong></p>
<p>You can use VirtualBox with Minikube instead.</p>
<p>To install VirtualBox run the following command in Linux:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># for Arch Linux</span>
$ sudo pacman -S virtualbox 

<span class="hljs-comment"># for Ubuntu</span>
$ sudo apt-get update
$ sudo apt-get install virtualbox
$ sudo apt-get install virtualbox-ext-pack
</code></pre>
<p><strong>1.2) Install Minikube on Linux</strong></p>
<ul>
<li>If your system’s package manager or software manager includes <code>minikube</code>, use it and verify that the version matches the minimum requirements.</li>
</ul>
<pre><code class="lang-bash">$ sudo apt install minikube

<span class="hljs-comment"># For Arch Linux</span>
$ sudo pacman -S minikube
</code></pre>
<p><strong>1.3) Start Minikube Cluster on Linux</strong></p>
<p>To initialize your <code>minikube</code> cluster, use the <code>minikube start</code> command. (I’m using Arch Linux with <code>virtualbox</code> — you can use <code>kvm2</code>)</p>
<pre><code class="lang-bash">$ minikube start --driver=virtualbox minikube v1.33.1 on Arch <span class="hljs-string">"rolling"</span> ▪ KUBECONFIG=/home/sagar/.kube/confi
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube <span class="hljs-keyword">in</span> cluster minikube
🔄 Restarting existing virtualbox VM <span class="hljs-keyword">for</span> <span class="hljs-string">"minikube"</span> ...
🎉 minikube 1.26.1 is available! Download it: 
💡 To <span class="hljs-built_in">disable</span> this notice, run: <span class="hljs-string">'minikube config set WantUpdateNotification false'</span> 🐳 Preparing Kubernetes v1.33.1 on Docker 24.0.5 ... ▪ kubelet.housekeeping-interval=5m ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 ▪ Using image kubernetesui/dashboard:v2.3.1 ▪ Using image kubernetesui/metrics-scraper:v1.0.7 ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.1.1 ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
🔎 Verifying Kubernetes components...
🔎 Verifying ingress addon...
🌟 Enabled addons: storage-provisioner, default-storageclass, ingress, dashboard
🏄 Done! kubectl is now configured to use <span class="hljs-string">"minikube"</span> cluster and <span class="hljs-string">"default"</span> namespace by defaul
</code></pre>
<blockquote>
<p><strong>Note: To set the default driver, run the command</strong><code>minikube config set driver DRIVER</code><strong>.</strong></p>
</blockquote>
<h3 id="heading-step-2-verify-minikube-installation">Step-2: Verify Minikube installation</h3>
<p>Use the <code>minikube status</code> command to validate that the <code>minikube</code> installation is running successfully:</p>
<pre><code class="lang-bash">$ minikube status
minikube
<span class="hljs-built_in">type</span>: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
</code></pre>
<p>If you see any errors, ensure the driver is set up correctly or refer to the <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">Minikube Get Started documentation</a> for troubleshooting.</p>
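<p>As an additional check, you can confirm that <code>kubectl</code> can reach the cluster. Minikube bundles its own <code>kubectl</code>, so this works even if you haven't installed it separately; your node's age and version will differ:</p>
<pre><code class="lang-bash">$ minikube kubectl -- get nodes

NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   10m   v1.33.1
</code></pre>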
<h3 id="heading-step-3-add-extensions-to-minikube">Step-3: Add extensions to Minikube</h3>
<p>Minikube provides an add-on-based extension system, so you can add more features by enabling the add-ons you need.</p>
<p>To view the list of available add-ons and their installation status, use the following command:</p>
<pre><code class="lang-bash">$ minikube addons list

|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | third-party (ambassador)       |
| auto-pause                  | minikube | disabled     | google                         |
| csi-hostpath-driver         | minikube | disabled     | kubernetes                     |
| dashboard                   | minikube | disabled     | kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | kubernetes                     |
| efk                         | minikube | disabled     | third-party (elastic)          |
| freshpod                    | minikube | disabled     | google                         |
| gcp-auth                    | minikube | disabled     | google                         |
| gvisor                      | minikube | disabled     | google                         |
| helm-tiller                 | minikube | disabled     | third-party (helm)             |
| ingress                     | minikube | disabled     | unknown (third-party)          |
| ingress-dns                 | minikube | disabled     | google                         |
| istio                       | minikube | disabled     | third-party (istio)            |
...
</code></pre>
<p>In my output, only the default add-ons are enabled (marked with a check and <em>enabled</em> status); the rest are add-ons available for your Minikube.</p>
<p>To enable an add-on, run the following command:</p>
<pre><code class="lang-bash">$ minikube addons <span class="hljs-built_in">enable</span> ingress

    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.1.1
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
🔎  Verifying ingress addon...
🌟  The <span class="hljs-string">'ingress'</span> addon is enabled
</code></pre>
<p>The versions and Docker images in your output may differ from mine, but make sure the final verification succeeds.</p>
<p>Also, let’s enable the <code>dashboard</code> add-on with the following command:</p>
<pre><code class="lang-bash">$ minikube addons <span class="hljs-built_in">enable</span> dashboard

    ▪ Using image kubernetesui/dashboard:v2.3.1
    ▪ Using image kubernetesui/metrics-scraper:v1.0.7
💡  Some dashboard features require the metrics-server addon. To <span class="hljs-built_in">enable</span> all features please run:
        minikube addons <span class="hljs-built_in">enable</span> metrics-server
🌟  The <span class="hljs-string">'dashboard'</span> addon is enabled
</code></pre>
<p>Now check the add-ons' status by re-running the list command:</p>
<pre><code class="lang-bash">$ minikube addons list

|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | third-party (ambassador)       |
| auto-pause                  | minikube | disabled     | google                         |
| csi-hostpath-driver         | minikube | disabled     | kubernetes                     |
| dashboard                   | minikube | enabled ✅   | kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | kubernetes                     |
| efk                         | minikube | disabled     | third-party (elastic)          |
| freshpod                    | minikube | disabled     | google                         |
| gcp-auth                    | minikube | disabled     | google                         |
| gvisor                      | minikube | disabled     | google                         |
| helm-tiller                 | minikube | disabled     | third-party (helm)             |
| ingress                     | minikube | enabled ✅   | unknown (third-party)          |
| ingress-dns                 | minikube | disabled     | google                         |
| istio                       | minikube | disabled     | third-party (istio)            |
| istio-provisioner           | minikube | disabled     | third-party (istio)            |
...
</code></pre>
<p>As you can see from my output, <strong>dashboard</strong> and <strong>ingress</strong> are now enabled.</p>
<blockquote>
<p><strong>Note:</strong> The <code>ingress</code> add-on is enabled so that an IP is assigned to Minikube; using that IP, we can expose the Kubernetes cluster to the outside, i.e. access it from outside the K8s cluster.</p>
</blockquote>
<p>Once the dashboard is enabled you can open it by using the <code>minikube dashboard</code> command. This command will open the dashboard in your web browser.</p>
<pre><code class="lang-bash">$ minikube dashboard 🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
🎉 Opening  <span class="hljs-keyword">in</span> your default browser...
Gtk-Message: 14:38:42.184: Failed to load module <span class="hljs-string">"appmenu-gtk-module"</span>
Opening <span class="hljs-keyword">in</span> existing browser session.
</code></pre>
<p>Browser output:</p>
<p><img src="https://scanskill.com/wp-content/uploads/2022/08/Selection_424-1024x529.png" alt /></p>
<p>Press <code>Ctrl+C</code> in the terminal to stop the connection.</p>
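<p>If you don't want minikube to open a browser for you (for example, on a headless server), the <code>--url</code> flag prints the dashboard URL instead; the port below is random and will differ on your machine:</p>
<pre><code class="lang-bash">$ minikube dashboard --url

http://127.0.0.1:43735/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
</code></pre>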
<h3 id="heading-step-4-enable-external-access-to-minikube-ingress">Step-4: Enable external access to Minikube Ingress</h3>
<p>Now, using the external Minikube IP and a hostname associated with our ingress, we can access services from outside the Kubernetes cluster.</p>
<p><strong>4.1) External access to Minikube</strong></p>
<p>Routing traffic from your local machine to your Minikube Kubernetes cluster requires two steps.</p>
<p>First, you must find the local IP assigned to your ingress add-on. To find the ingress IP, run the following command:</p>
<pre><code class="lang-bash">$ minikube ip

192.168.59.109
</code></pre>
<p>Your IP might be different as it depends on your virtual environment configuration.</p>
<p>Now you can access Kubernetes applications from outside the K8s cluster (from a local machine) using this IP.</p>
<h2 id="heading-additional-step">Additional Step</h2>
<p>In this step, you will map a hostname to your Minikube ingress IP so that you can use the hostname instead of the IP.</p>
<p>To do this follow the steps:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Linux or MacOS</span>
$ sudo nano /etc/hosts
</code></pre>
<p>For Windows, open <code>C:\Windows\System32\drivers\etc\hosts</code> and edit it with administrative privileges.</p>
<pre><code class="lang-bash">&lt;your-minikube-ip&gt;  test.example.com
</code></pre>
<p>Replace <code>&lt;your-minikube-ip&gt;</code> with your Minikube ingress IP. In my case, the hosts file looks like this:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Static table lookup for hostnames.</span>
<span class="hljs-comment"># See hosts(5) for details.</span>
127.0.0.1       localhost
::1             localhost
127.0.1.1       arch_linux

<span class="hljs-comment"># Minikube ingress IP</span>
192.168.59.109  dev.sagar.com
</code></pre>
<p>Here, I have declared <code>dev.sagar.com</code> as the hostname for my Kubernetes Minikube ingress IP.</p>
<pre><code class="lang-plaintext"> test.example.com
</code></pre>
<p><strong>Note:</strong> If you delete and recreate your Minikube cluster, you must update the IP address in the <strong>hosts</strong> file accordingly.</p>
<p>Now, you can access services in the cluster with the hostname and the relative path associated with the ingress. For example, if your application is mapped to the path <strong>/demoapp</strong>, you can access it at <a target="_blank" href="http://test.example.com/demoapp"><strong><em>http://test.example.com/demoapp</em></strong></a>. In my case, <a target="_blank" href="http://dev.sagar.com/demoapp"><strong><em>http://dev.sagar.com/demoapp</em></strong></a>.</p>
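<p>You can quickly test the mapping from the command line. A sketch using the hostname and ingress IP from this tutorial (replace <strong>/demoapp</strong> with a path your ingress actually routes):</p>
<pre><code class="lang-bash"># Resolves dev.sagar.com through /etc/hosts and hits the ingress
$ curl http://dev.sagar.com/demoapp

# Equivalent test without editing /etc/hosts: send the Host header explicitly
$ curl -H "Host: dev.sagar.com" http://192.168.59.109/demoapp
</code></pre>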
<p>That’s it!</p>
<p>In this tutorial, we covered how to create a local Kubernetes cluster with Minikube and make it accessible from outside the cluster.</p>
<p>Thank you!</p>
]]></content:encoded></item><item><title><![CDATA[Let's Encrypt(Certbot) free SSL with Nginx/Apache configurations on ubuntu (24.04 | 22.04 | 20.04)]]></title><description><![CDATA[Certbot is part of EFF's effort to encrypt the entire internet. Anyone who has gone through the trouble of setting up a secure website knows what a hassle getting and maintaining a certificate is. Certbot and Let's Encrypt can automate away the pain ...]]></description><link>https://blog.budhathokisagar.com.np/letsfree-ssl-with-nginx-apache-on-ubuntu</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/letsfree-ssl-with-nginx-apache-on-ubuntu</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[SSL]]></category><category><![CDATA[nginx]]></category><category><![CDATA[apache]]></category><category><![CDATA[certbot]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Fri, 08 Jul 2022 07:44:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1657266201043/EHpSVngSo.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Certbot is part of EFF's effort to encrypt the entire internet. Anyone who has gone through the trouble of setting up a secure website knows what a hassle getting and maintaining a certificate is. Certbot and Let's Encrypt can automate away the pain and let you turn on and manage HTTPS with simple setup commands. It's totally free to use.</p>
<p>You aren't required to use Let's Encrypt to obtain an SSL certificate; you have the flexibility to use any Certificate Authority you choose.</p>
<p>This tutorial helps you install the Let's Encrypt client on an Ubuntu Linux system.</p>
<h3 id="heading-prerequisites">Prerequisites:</h3>
<ul>
<li><p>A running Ubuntu system with non-root, sudo enabled user.</p>
</li>
<li><p>A fully registered domain name pointed to the ubuntu server.</p>
</li>
<li><p>A server running Nginx or Apache. (We will use Nginx for this tutorial.)</p>
</li>
<li><p>Ports 80 and 443 open (not blocked by a firewall) on your server.</p>
</li>
</ul>
<p><strong>Note:</strong> <em>Installation method is the same for Apache too, only the plugins used are different.</em></p>
<h3 id="heading-installation">Installation:</h3>
<h4 id="heading-1-installing-certbot">1. Installing Certbot</h4>
<p>Certbot is a third-party client that makes it easier to obtain and install Let's Encrypt certificates. Upstream recommends the snap package (snaps work on nearly all Linux flavors but require <code>snapd</code>); in this tutorial we install Certbot from Ubuntu's apt repositories instead. First, SSH into the server and update the package index:</p>
<pre><code class="lang-bash">sudo apt update &amp;&amp; upgrade -y
</code></pre>
<p>After the system has been updated and upgraded, install the packages required to run Certbot with the Nginx plugin:</p>
<pre><code class="lang-bash">sudo apt install certbot python3-certbot-nginx
</code></pre>
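<p>You can confirm the installation before moving on:</p>
<pre><code class="lang-bash">certbot --version
</code></pre>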
<p>Once done, confirm the Nginx virtual host configuration. Certbot reads the Nginx virtual host to determine which domain to issue the certificate for, so a correct <strong><em>server_name</em></strong> is what guarantees a successful Let's Encrypt installation.</p>
<h4 id="heading-2-nginx-virtualhost-configuration">2. Nginx Virtualhost configuration</h4>
<p>To create a Certbot SSL certificate, make sure the domain or subdomain is configured in the Nginx virtual host.</p>
<p>Open the file <strong><em>/etc/nginx/sites-available/your.domain.conf</em></strong> and set <strong><em>server_name</em></strong> to your domain.</p>
<pre><code class="lang-bash">vim /etc/nginx/sites-available/your.domain.conf
</code></pre>
<pre><code class="lang-bash">...
...
server {
         listen 80 default_server;
         root /var/www/html;
         <span class="hljs-keyword">if</span> (<span class="hljs-variable">$http_user_agent</span> ~* LWP::Simple|BBBike|wget) {
         <span class="hljs-built_in">return</span> 403;
         }
         index index.html index.htm index.nginx-debian.html;
         server_name your.domain.com
         <span class="hljs-built_in">return</span> 404;
...
...
</code></pre>
<p>Make sure <strong><em>server_name</em></strong> matches the domain Let's Encrypt is going to register. Then test the Nginx configuration.</p>
<h5 id="heading-nginx-testing">Nginx testing:</h5>
<p>After the configuration has been saved, use the following command to check the syntax:</p>
<pre><code class="lang-bash">nginx -t
</code></pre>
<p>On a correct configuration, the output will be:</p>
<pre><code class="lang-bash">nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf <span class="hljs-built_in">test</span> is successful
</code></pre>
<p>Restart the Nginx service:</p>
<pre><code class="lang-bash">sudo systemctl restart nginx
</code></pre>
<h4 id="heading-3-allow-https">3. Allow HTTPS</h4>
<p>You have to open ports 80 and 443 (HTTP and HTTPS) in the firewall so that traffic can enter and exit the server.</p>
<p>Check the firewall status:</p>
<pre><code class="lang-bash">ufw status
</code></pre>
<p>If the firewall is inactive, you can continue to the next step. But turning the firewall on is recommended, since it protects the server from external attacks.</p>
<p>Now, add rules for ports 80 and 443, i.e. HTTP and HTTPS (plus SSH, so you don't lock yourself out):</p>
<pre><code class="lang-bash">ufw allow http
ufw allow https
ufw allow ssh
</code></pre>
<p>Then enable Firewall/UFW:</p>
<pre><code class="lang-bash">ufw <span class="hljs-built_in">enable</span>
</code></pre>
<p>Check the status:</p>
<pre><code class="lang-bash">ufw status
</code></pre>
<p>Output will be:</p>
<pre><code class="lang-bash">Status: active

To                         Action      From
--                         ------      ----
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
244                        ALLOW       Anywhere
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
244 (v6)
</code></pre>
<p>Finally, you can run Certbot and generate certificates.</p>
<h4 id="heading-4-generate-ssl">4. Generate SSL</h4>
<p>Since we're using the Nginx plugin, we can create a certificate for the domain <strong><em>your.domain.com</em></strong> as:</p>
<pre><code class="lang-bash">certbot --nginx -d your.domain.com
</code></pre>
<p>This will create a certificate for the domain we are requesting; answer a few questions (email, agreeing to the terms, etc.). After that, the Let's Encrypt certificate will be stored under <strong><em>/etc/letsencrypt/live/your.domain.com/</em></strong>, and your Nginx configuration in <strong><em>/etc/nginx/sites-available/</em></strong> will be updated to use it.</p>
<pre><code class="lang-bash">Output

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this <span class="hljs-keyword">for</span>
new sites, or <span class="hljs-keyword">if</span> you<span class="hljs-string">'re confident your site works on HTTPS. You can undo this
change by editing your web server'</span>s configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] <span class="hljs-keyword">then</span> [enter] (press <span class="hljs-string">'c'</span> to cancel):
</code></pre>
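<p>Alternatively, if you're scripting the setup, Certbot can run non-interactively. A sketch using standard Certbot flags (replace the email and domain with your own; <code>--redirect</code> enables the HTTP-to-HTTPS redirect):</p>
<pre><code class="lang-bash">certbot --nginx -d your.domain.com --non-interactive --agree-tos -m admin@your.domain.com --redirect
</code></pre>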
<p>The certificate is only valid for 90 days, so it must be renewed before it expires. The good news is that Certbot already ships a renewal service, scheduled via a systemd timer (and a cron job under <strong><em>/etc/cron.d/</em></strong>):</p>
<pre><code class="lang-bash">systemctl status certbot.timer
</code></pre>
<p>Output will be:</p>
<pre><code class="lang-bash">● certbot.timer - Run certbot twice daily
   Loaded: loaded (/lib/systemd/system/certbot.timer; enabled; vendor preset: enabled)
   Active: active (waiting) since Thu 2021-12-23 00:56:59 UTC; 1 months 17 days ago
  Trigger: Wed 2022-02-09 23:47:10 UTC; 18h left
</code></pre>
<p>This timer runs twice a day and renews any certificate that is within 30 days of its expiration date.</p>
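<p>To see which certificates Certbot currently manages and when they expire, you can list them:</p>
<pre><code class="lang-bash">certbot certificates
</code></pre>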
<p>Test the update and ensure the renewal process works:</p>
<pre><code class="lang-bash">certbot renew --dry-run
</code></pre>
<pre><code class="lang-bash">Saving debug <span class="hljs-built_in">log</span> to /var/<span class="hljs-built_in">log</span>/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/your.domain.com.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not due <span class="hljs-keyword">for</span> renewal, but simulating renewal <span class="hljs-keyword">for</span> dry run
Plugins selected: Authenticator apache, Installer apache
Renewing an existing certificate
Performing the following challenges:
http-01 challenge <span class="hljs-keyword">for</span> your.domain.com
Waiting <span class="hljs-keyword">for</span> verification...
Cleaning up challenges

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed with reload of apache server; fullchain is
/etc/letsencrypt/live/your.domain.com/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/your.domain.com.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not due <span class="hljs-keyword">for</span> renewal, but simulating renewal <span class="hljs-keyword">for</span> dry run
Plugins selected: Authenticator apache, Installer apache
Renewing an existing certificate
Performing the following challenges:
http-01 challenge <span class="hljs-keyword">for</span> your.domain.com
Waiting <span class="hljs-keyword">for</span> verification...
Cleaning up challenges
</code></pre>
<p>That's it.</p>
<p>If the automatic renewal fails, Certbot sends an error message to the email that was registered at the time of generating the certificate.</p>
<p>Explore more DevOps blogs of mine: <a target="_blank" href="https://scanskill.com/profile/sagar">https://scanskill.com/profile/sagar</a></p>
<p>Thank you!</p>
]]></content:encoded></item><item><title><![CDATA[Setup On-premise GitLab Server, Runner, CI/CD, and Nginx Configuration]]></title><description><![CDATA[In this article, I’m gonna walk you through configuring on-premise GitLab Server, GitLab Runner, and CI/CD for containerized microservices with NGINX configuration as a load balancer. We’ll deploy and run all the containerized microservices on the sa...]]></description><link>https://blog.budhathokisagar.com.np/setup-on-premise-gitlab-server-runner-cicd-and-nginx-configuration</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/setup-on-premise-gitlab-server-runner-cicd-and-nginx-configuration</guid><category><![CDATA[BlogsWithCC]]></category><category><![CDATA[GitLab]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[gitlab-runner]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Fri, 08 Jul 2022 06:46:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1657264087154/oXndY1eb4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, I’m gonna walk you through configuring <strong>on-premise GitLab Server, GitLab Runner, and CI/CD for containerized microservices with NGINX configuration as a load balancer</strong>. We’ll deploy and run all the containerized microservices on the same machine where GitLab and Runner are set up.</p>
<h1 id="heading-gitlab-server"><strong>GitLab Server</strong></h1>
<p>GitLab allows you to host an on-premise GitLab server (or Git repository) that can be accessed from your LAN (or WAN, if you have a public IP address).</p>
<p>We can install the GitLab server either in a container environment like Docker or on the host machine. In this article, I'll install it on the host machine (i.e. an Ubuntu 20.04 machine).</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li>Machine with at least 2 cores and 4GB of memory.</li>
</ul>
<h2 id="heading-on-premise-gitlab-server">On-Premise GitLab Server</h2>
<h3 id="heading-on-premise-installation">On-Premise Installation</h3>
<h4 id="heading-step-1-update-system-and-install-required-dependencies"><strong>Step-1: Update System and Install Required Dependencies</strong></h4>
<p>Open the terminal on the server (machine) and run the following commands:</p>
<pre><code>$ sudo apt update
$ sudo apt upgrade -y
</code></pre><p>After running these, let’s install some dependencies by running:</p>
<pre><code>$ sudo apt install <span class="hljs-operator">-</span>y ca<span class="hljs-operator">-</span>certificates curl openssh<span class="hljs-operator">-</span>server tzdata perl
</code></pre><h4 id="heading-step-2-install-postfix-for-email-notifications-optional"><strong>Step-2: Install Postfix for Email Notifications (Optional)</strong></h4>
<p>Optionally, if you want to use the same system or server to send email notifications to users, then you can install postfix, an open-source mail transfer agent.</p>
<pre><code>$ sudo apt install postfix -y
</code></pre><p>While installing Postfix you'll be asked to set it up: select the 'Internet Site' option, set your domain as the mail name, and configure the other required settings.</p>
<p>Further, you’ll need to install the mailutils package:</p>
<pre><code>$ sudo apt install mailutils
</code></pre><p>OR, you can set up an SMTP server to send email notifications instead of Postfix. To do this, first install GitLab and then edit <em>/etc/gitlab/gitlab.rb</em>.</p>
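<p>A minimal sketch of the SMTP settings in <em>/etc/gitlab/gitlab.rb</em> (the host, port, and credentials below are placeholders for your own provider; run <code>sudo gitlab-ctl reconfigure</code> afterwards to apply):</p>
<pre><code>gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.example.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "gitlab@example.com"
gitlab_rails['smtp_password'] = "your-smtp-password"
gitlab_rails['smtp_domain'] = "example.com"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
</code></pre>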
<h4 id="heading-step-3-add-gitlab-package-repository-and-gpg-key"><strong>Step-3: Add GitLab Package Repository and GPG Key</strong></h4>
<p>As GitLab is not available in Ubuntu's base repository, you need to add the GitLab package repository and GPG key by running the following command:</p>
<pre><code>$ curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash
</code></pre><p>Here, I've added the package repository for GitLab Enterprise Edition; you can also add the one for GitLab Community Edition.</p>
<p>After adding the repository package, you can see the repository contents in:</p>
<pre><code>$ cat /etc/apt/sources.list.d/gitlab_gitlab-ee.list
</code></pre><h4 id="heading-step-4-install-gitlab-ee-on-ubuntu-2204-or-2004-or-1804"><strong>Step-4: Install GitLab EE on Ubuntu (22.04 | 20.04 | 18.04)</strong></h4>
<p>Now update the system again and install GitLab EE on the machine:</p>
<pre><code>$ sudo apt update
$ sudo apt install gitlab-ee
</code></pre><p>You'll see output similar to this:</p>
<pre><code>It looks like GitLab has not been configured yet; skipping the upgrade script.

       *.                  *.
      *<span class="hljs-operator">*</span><span class="hljs-operator">*</span>                 <span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>
     <span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>               <span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>
    .*<span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>             <span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>
    <span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>            <span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>
   ,,,,,,,,,<span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>,,,,,,,,,
  ,,,,,,,,,,,<span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>,,,,,,,,,,,
  .,,,,,,,,,,,<span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>,,,,,,,,,,,,
      ,,,,,,,,,<span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>,,,,,,,,,.
         ,,,,,,,<span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>,,,,,,
            .,,,<span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>,,,,
                ,<span class="hljs-operator">*</span>,.


     _______ __  __          __
    <span class="hljs-operator">/</span> ____(<span class="hljs-keyword">_</span>) <span class="hljs-operator">/</span><span class="hljs-keyword">_</span><span class="hljs-operator">/</span> <span class="hljs-operator">/</span>   ____ <span class="hljs-keyword">_</span><span class="hljs-operator">/</span> <span class="hljs-operator">/</span><span class="hljs-keyword">_</span>
   <span class="hljs-operator">/</span> <span class="hljs-operator">/</span> __<span class="hljs-operator">/</span> <span class="hljs-operator">/</span> __<span class="hljs-operator">/</span> <span class="hljs-operator">/</span>   <span class="hljs-operator">/</span> __ `<span class="hljs-operator">/</span> __ \\
  <span class="hljs-operator">/</span> <span class="hljs-operator">/</span><span class="hljs-keyword">_</span><span class="hljs-operator">/</span> <span class="hljs-operator">/</span> <span class="hljs-operator">/</span> <span class="hljs-operator">/</span><span class="hljs-keyword">_</span><span class="hljs-operator">/</span> <span class="hljs-operator">/</span>___<span class="hljs-operator">/</span> <span class="hljs-operator">/</span><span class="hljs-keyword">_</span><span class="hljs-operator">/</span> <span class="hljs-operator">/</span> <span class="hljs-operator">/</span><span class="hljs-keyword">_</span><span class="hljs-operator">/</span> <span class="hljs-operator">/</span>
  \\____<span class="hljs-operator">/</span><span class="hljs-keyword">_</span><span class="hljs-operator">/</span>\\__<span class="hljs-operator">/</span>_____<span class="hljs-operator">/</span>\\__,<span class="hljs-keyword">_</span><span class="hljs-operator">/</span><span class="hljs-keyword">_</span>.___/


Thank you <span class="hljs-keyword">for</span> installing GitLab<span class="hljs-operator">!</span>
</code></pre><p>Now edit <strong><em>external_url</em></strong> in <em>/etc/gitlab/gitlab.rb</em> to set the <strong><em>hostname</em></strong>. You can also configure other parameters. Replace <em>gitlab.example.com</em> with a valid domain name.</p>
<pre><code>$ sudo nano /etc/gitlab/gitlab.rb
external_url "http://gitlab.example.com"
</code></pre><p>OR, if you're not going to use DNS, you can simply use your server's IP address, like my local machine's IP in my case:</p>
<pre><code><span class="hljs-attribute">external_url</span> <span class="hljs-string">"&lt;http://10.10.5.55&gt;"</span>
</code></pre><p>When done, start GitLab by running the following command:</p>
<pre><code>$ sudo gitlab-ctl reconfigure
</code></pre><p>After successful reconfiguration, check the status of GitLab:</p>
<pre><code>$ sudo gitlab-ctl status
</code></pre><p>The output will be similar to this:</p>
<pre><code><span class="hljs-attribute">run</span>: <span class="hljs-attribute">alertmanager</span>: (pid <span class="hljs-number">92581</span>) <span class="hljs-number">18s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92343</span>) <span class="hljs-number">80s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">gitaly</span>: (pid <span class="hljs-number">92590</span>) <span class="hljs-number">18s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91561</span>) <span class="hljs-number">189s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">gitlab-exporter</span>: (pid <span class="hljs-number">92551</span>) <span class="hljs-number">20s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92078</span>) <span class="hljs-number">98s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">gitlab-kas</span>: (pid <span class="hljs-number">92520</span>) <span class="hljs-number">22s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91845</span>) <span class="hljs-number">175s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">gitlab-workhorse</span>: (pid <span class="hljs-number">92531</span>) <span class="hljs-number">21s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91985</span>) <span class="hljs-number">117s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">grafana</span>: (pid <span class="hljs-number">92610</span>) <span class="hljs-number">17s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92471</span>) <span class="hljs-number">38s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">logrotate</span>: (pid <span class="hljs-number">91486</span>) <span class="hljs-number">202s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91494</span>) <span class="hljs-number">201s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">nginx</span>: (pid <span class="hljs-number">91993</span>) <span class="hljs-number">114s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92013</span>) <span class="hljs-number">110s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">node-exporter</span>: (pid <span class="hljs-number">92540</span>) <span class="hljs-number">21s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92049</span>) <span class="hljs-number">104s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">postgres-exporter</span>: (pid <span class="hljs-number">92601</span>) <span class="hljs-number">18s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92367</span>) <span class="hljs-number">76s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">postgresql</span>: (pid <span class="hljs-number">91693</span>) <span class="hljs-number">184s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91704</span>) <span class="hljs-number">183s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">prometheus</span>: (pid <span class="hljs-number">92560</span>) <span class="hljs-number">20s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92297</span>) <span class="hljs-number">88s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">puma</span>: (pid <span class="hljs-number">91904</span>) <span class="hljs-number">132s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91917</span>) <span class="hljs-number">129s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">redis</span>: (pid <span class="hljs-number">91521</span>) <span class="hljs-number">196s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91538</span>) <span class="hljs-number">193s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">redis-exporter</span>: (pid <span class="hljs-number">92553</span>) <span class="hljs-number">20s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">92217</span>) <span class="hljs-number">94s</span>
<span class="hljs-attribute">run</span>: <span class="hljs-attribute">sidekiq</span>: (pid <span class="hljs-number">91922</span>) <span class="hljs-number">126s</span>; <span class="hljs-attribute">run</span>: <span class="hljs-attribute">log</span>: (pid <span class="hljs-number">91934</span>) <span class="hljs-number">122s</span>
</code></pre><h4 id="heading-step-5-gitlab-web-interface"><strong>Step-5: GitLab Web Interface</strong></h4>
<p>Once the installation completes, open the URL you set as external_url, <strong><em>http://gitlab.example.com</em></strong>, in your browser. In my case, I set it up locally using my server's IP address, so I can access it through <strong><em>http://10.10.5.55/</em></strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657261163387/bMT6MHjJA.png" alt="Selection_345.png" /></p>
<p>Now, first, you can log in as the root user using the username <strong><em>root</em></strong> and the password from <strong><em>/etc/gitlab/initial_root_password</em></strong>. (The password for the root user is randomly generated and stored for 24 hours only.)</p>
<p>To check the password run the following:</p>
<pre><code>$ cat <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>gitlab<span class="hljs-operator">/</span>initial_root_password
</code></pre><pre><code><span class="hljs-comment"># WARNING: This value is valid only in the following conditions</span>
<span class="hljs-comment">#          1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).</span>
<span class="hljs-comment">#          2. Password hasn't been changed manually, either via UI or via command line.</span>
<span class="hljs-comment">#</span>
<span class="hljs-comment">#          If the password shown here doesn't work, you must reset the admin password following &lt;https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password&gt;.</span>

<span class="hljs-attribute">Password</span>: dfdfkOtOjWp<span class="hljs-number">7</span>v<span class="hljs-number">70</span>OjkjtadsdfsafsadnSJAhcDbCNo<span class="hljs-number">9</span>nTNGVC<span class="hljs-number">5</span>UoSCyE=

<span class="hljs-comment"># <span class="hljs-doctag">NOTE:</span> This file will be automatically deleted in the first reconfigure run after 24 hours.</span>
<span class="hljs-attribute">Copy</span> the password and login:
</code></pre><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657261190786/QqJUk4m1n.png" alt="Selection_346.png" /></p>
<p>After logging in, first reset the root user password. To do this, go to <strong>root user profile</strong> &gt; <strong>Preferences</strong> &gt; <strong>Password</strong>. Change.</p>
<p>You can start, stop or restart all the GitLab components by using the following commands:</p>
<pre><code>$ sudo gitlab-ctl start
$ sudo gitlab-ctl restart
$ sudo gitlab-ctl stop
</code></pre><p>You can also restart individual components:</p>
<pre><code>$ sudo gitlab-ctl <span class="hljs-keyword">restart</span> prometheus
</code></pre><pre><code><span class="hljs-attribute">ok</span>: <span class="hljs-attribute">run</span>: <span class="hljs-attribute">prometheus</span>: (pid <span class="hljs-number">2673</span>) <span class="hljs-number">8s</span>
</code></pre><p>This is all you need to set up GitLab Server on Ubuntu Server.</p>
<h1 id="heading-gitlab-runner"><strong>GitLab Runner</strong></h1>
<p>GitLab Runner can be installed:</p>
<ul>
<li>In a container(Docker, K8s, or OpenShift)</li>
<li>By downloading binary manually</li>
<li>By using repository packages</li>
</ul>
<p><strong><em>In this, we’ll be installing Runner on the local machine as I have to deploy microservices on the same machine where GitLab and Runner will be running.</em></strong></p>
<h3 id="heading-gitlab-runner-installation">GitLab Runner – Installation</h3>
<p>If you're using a different server for the runner than the one GitLab is configured on, first SSH into that server and then follow the steps below. Otherwise, start directly from Step-1.</p>
<h4 id="heading-step-1-add-official-repository"><strong>Step-1: Add Official Repository</strong></h4>
<p>First, add the official GitLab Runner repository using the command below:</p>
<pre><code>$ curl <span class="hljs-operator">-</span>L <span class="hljs-string">"https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh"</span> <span class="hljs-operator">|</span> sudo bash
</code></pre><h4 id="heading-step-2-install-runner-on-ubuntu-2204-or-2004-or-1804"><strong>Step-2: Install Runner on Ubuntu (22.04 | 20.04 | 18.04)</strong></h4>
<p>Run the following command to install GitLab Runner:</p>
<pre><code>$ sudo apt update
$ sudo apt install gitlab-runner
</code></pre><p>(Optional) To install a specific version:</p>
<pre><code>$ sudo apt install gitlab<span class="hljs-operator">-</span>runner<span class="hljs-operator">=</span><span class="hljs-number">10.0</span><span class="hljs-number">.0</span>
</code></pre><p>After Installation, check the GitLab Runner version:</p>
<pre><code>$ sudo gitlab<span class="hljs-operator">-</span>runner <span class="hljs-operator">-</span><span class="hljs-operator">-</span>version
</code></pre><p>Output:</p>
<pre><code><span class="hljs-attribute">Version</span>:      <span class="hljs-number">15</span>.<span class="hljs-number">0</span>.<span class="hljs-number">0</span>
<span class="hljs-attribute">Git</span> revision: febb<span class="hljs-number">2</span>a<span class="hljs-number">09</span>
<span class="hljs-attribute">Git</span> branch:   <span class="hljs-number">15</span>-<span class="hljs-number">0</span>-stable
<span class="hljs-attribute">GO</span> version:   go<span class="hljs-number">1</span>.<span class="hljs-number">17</span>.<span class="hljs-number">7</span>
<span class="hljs-attribute">Built</span>:        <span class="hljs-number">2022</span>-<span class="hljs-number">05</span>-<span class="hljs-number">19</span>T<span class="hljs-number">20</span>:<span class="hljs-number">03</span>:<span class="hljs-number">43</span>+<span class="hljs-number">0000</span>
<span class="hljs-attribute">OS</span>/Arch:      linux/amd<span class="hljs-number">64</span>
</code></pre><p>To check the status:</p>
<pre><code>$ sudo gitlab-runner status
</code></pre><p>Output:</p>
<pre><code>Runtime platform                                    arch<span class="hljs-operator">=</span>amd64 os<span class="hljs-operator">=</span>linux pid<span class="hljs-operator">=</span><span class="hljs-number">29368</span> revision<span class="hljs-operator">=</span>febb2a09 version<span class="hljs-operator">=</span><span class="hljs-number">15.0</span><span class="hljs-number">.0</span>
gitlab<span class="hljs-operator">-</span>runner: Service <span class="hljs-keyword">is</span> running<span class="hljs-operator">!</span>
</code></pre><p>You can Start, Stop, and Restart GitLab Runner by running the following:</p>
<pre><code>$ sudo gitlab-runner start
$ sudo gitlab-runner stop
$ sudo gitlab-runner restart
</code></pre><h4 id="heading-step-3-grant-permission-to-gitlab-runner-user"><strong>Step-3: Grant Permission to GitLab Runner User</strong></h4>
<p>After successful installation, you'll see a <strong><em>gitlab-runner</em></strong> user's home directory under <strong><em>/home</em></strong>. You need to grant sudo permission to the gitlab-runner user. For this, open the sudoers file:</p>
<pre><code>$ sudo visudo
</code></pre><p>Add the following lines to give the <strong>gitlab-runner</strong> user sudo access with <strong>NOPASSWD</strong>, as shown below:</p>
<pre><code>gitlab-runner <span class="hljs-keyword">ALL</span>=(<span class="hljs-keyword">ALL</span>:<span class="hljs-keyword">ALL</span>) <span class="hljs-keyword">ALL</span>

gitlab-runner <span class="hljs-keyword">ALL</span>=(<span class="hljs-keyword">ALL</span>) NOPASSWD: <span class="hljs-keyword">ALL</span>
</code></pre><p>Output:</p>
<pre><code>GNU nano <span class="hljs-number">4.8</span>                                /etc/sudoers.tmp
<span class="hljs-comment">#</span>
<span class="hljs-comment"># This file MUST be edited with the 'visudo' command as root.</span>
<span class="hljs-comment">#</span>
<span class="hljs-comment"># Please consider adding local content in /etc/sudoers.d/ instead of</span>
<span class="hljs-comment"># directly modifying this file.</span>
<span class="hljs-comment">#</span>
<span class="hljs-comment"># See the man page for details on how to write a sudoers file.</span>
<span class="hljs-comment">#</span>
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path=<span class="hljs-string">"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"</span>
<span class="hljs-comment"># Host alias specification</span>
<span class="hljs-comment"># User alias specification</span>
<span class="hljs-comment"># Cmnd alias specification</span>
<span class="hljs-comment"># User privilege specification</span>
root    ALL=(ALL:ALL) ALL
gitlab-runner ALL=(ALL:ALL) ALL
<span class="hljs-comment"># Members of the admin group may gain root privileges</span>
%admin ALL=(ALL) ALL
<span class="hljs-comment"># Allow members of group sudo to execute any command</span>
%sudo   ALL=(ALL:ALL) ALL
<span class="hljs-comment"># See sudoers(5) for more information on "#include" directives:</span>
<span class="hljs-comment">#includedir /etc/sudoers.d</span>
gitlab-runner ALL=(ALL) NOPASSWD: ALL
</code></pre><h4 id="heading-step-4-register-gitlab-runner"><strong>Step-4: Register GitLab Runner</strong></h4>
<p>Now, it’s time to register GitLab Runner to GitLab Server which was set up before.</p>
<ul>
<li>Log in to GitLab Server with <strong><em>username</em></strong> and <strong><em>password</em></strong>(create a user if you haven’t already and create a project).</li>
<li>Navigate to <strong>Settings</strong> and click on <strong>CI/CD</strong> and click on <strong>Expand</strong> of Runners section.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657261584493/-y_5_bqDv.png" alt="Selection_349.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657261603103/CSriD-qe7.png" alt="Selection_350.png" /></p>
<ul>
<li>Copy the URL and registration token, and register the runner:<pre><code>$ sudo gitlab-runner register --name project-name-runner --url http://10.10.5.55/ --registration-token GR1348941Ah6PxTBi5S5asJ7fzYSa
</code></pre>OR,</li>
</ul>
<pre><code>$ sudo gitlab-runner <span class="hljs-keyword">register</span>
</code></pre><p><strong><em>Note: During registration, you'll be asked some questions such as tags, name, executor, etc. Since I'm deploying on the same machine, I select the shell executor; if you're using Docker, you can select docker or docker+machine instead, as per your requirements.</em></strong></p>
<p>Now you have successfully registered the GitLab Runner.
Check the GitLab <strong>CI/CD</strong> &gt; <strong>Runners</strong> section and you'll see the newly registered runner.</p>
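<p>You can also verify the registration from the runner machine itself:</p>
<pre><code>$ sudo gitlab-runner list
$ sudo gitlab-runner verify
</code></pre>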
<h4 id="heading-error-this-job-is-stuck-because-the-project-doesnt-have-any-runners-online-assigned-to-it-go-to-runners-page">Error: This job is stuck because the project doesn’t have any runners online assigned to it. Go to Runners page.</h4>
<p>This happens when the runner was registered with tags that are not added to your CI/CD jobs. To solve it, add the tags to your CI/CD jobs, or do the following:</p>
<ul>
<li>Go to <strong>Settings</strong> &gt; <strong>CI/CD</strong> &gt; <strong>Runners</strong> (Click on Expand)</li>
<li>Click on the edit icon of the newly created runner. And check <strong>Run untagged jobs box</strong>.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657261697632/koO1lg-GF.png" alt="Selection_351.png" />
And here it is, you have successfully installed and registered gitlab-runner for your projects.</p>
<h3 id="heading-uninstalling-gitlab-runner"><strong>Uninstalling GitLab Runner</strong></h3>
<p>To completely remove gitlab-runner run the following commands:</p>
<pre><code>$ sudo apt purge <span class="hljs-operator">-</span><span class="hljs-operator">-</span>autoremove <span class="hljs-operator">-</span>y gitlab<span class="hljs-operator">-</span>runner
$ sudo apt<span class="hljs-operator">-</span>key del 51312f3f
$ sudo rm <span class="hljs-operator">-</span>rf <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>apt<span class="hljs-operator">/</span>sources.list.d/runner_gitlab<span class="hljs-operator">-</span>runner.list
$ sudo deluser <span class="hljs-operator">-</span><span class="hljs-operator">-</span>remove<span class="hljs-operator">-</span>home gitlab<span class="hljs-operator">-</span>runner
$ sudo rm <span class="hljs-operator">-</span>rf <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>gitlab<span class="hljs-operator">-</span>runner
</code></pre><h1 id="heading-cicd-for-containerized-applications">CI/CD for Containerized Applications</h1>
<p>My project structure:</p>
<pre><code>.
├── Dockerfile
├── .env
├── .git
├── .gitignore
├── .gitlab-ci.yml
├── package.json
├── README.md
├── .sample.env
└── src
</code></pre><h4 id="heading-step-1-configure-gitlab-ciyml-file"><strong>Step-1: Configure .gitlab-ci.yml File</strong></h4>
<p>Now it’s time to configure CI/CD for containerized applications to deploy on the same machine where the GitLab Runner is set up. For this, let’s configure <strong><em>.gitlab-ci.yml</em></strong> file as:</p>
<pre><code><span class="hljs-attr">stages:</span>   
  <span class="hljs-bullet">-</span> <span class="hljs-string">build</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">deploy</span>

<span class="hljs-attr">before_script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">IMAGE_TAG="$(echo</span> <span class="hljs-string">$CI_COMMIT_SHA</span> <span class="hljs-string">|</span> <span class="hljs-string">head</span> <span class="hljs-string">-c</span> <span class="hljs-number">8</span><span class="hljs-string">)"</span>

<span class="hljs-attr">dev-build-job:</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">build</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-comment"># - export DOCKER_HOST=tcp://0.0.0.0:2375/</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">cp</span> <span class="hljs-string">$ENV_FILE</span> <span class="hljs-string">.env</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">-t</span> <span class="hljs-string">$APP_NAME:latest</span> <span class="hljs-string">.</span>
  <span class="hljs-attr">only:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">dev</span>
  <span class="hljs-attr">tags:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">cloudyfox</span>

<span class="hljs-attr">dev-deploy-job:</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">deploy</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">container</span> <span class="hljs-string">rm</span> <span class="hljs-string">-f</span> <span class="hljs-string">$APP_NAME</span> <span class="hljs-string">||</span> <span class="hljs-literal">true</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">run</span> <span class="hljs-string">-d</span> <span class="hljs-string">-p</span> <span class="hljs-string">$PORT:$PORT</span> <span class="hljs-string">--name</span> <span class="hljs-string">$APP_NAME</span> <span class="hljs-string">$APP_NAME:latest</span>
  <span class="hljs-attr">only:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">dev</span>
  <span class="hljs-attr">tags:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">cloudyfox</span>
</code></pre><h4 id="heading-step-2-add-required-variables-and-deploy"><strong>Step-2: Add Required Variables and Deploy</strong></h4>
<p>Here, the pipeline has two stages: <strong>build</strong> and <strong>deploy</strong>. The <strong>IMAGE_TAG</strong> variable computed in <code>before_script</code> can be used to tag Docker images.</p>
<p>In the build stage, the <strong>$ENV_FILE</strong> variable is copied from the GitLab variables to the project root directory, because the real .env file is in .gitignore. So, create an <strong>ENV_FILE</strong> variable of type File with the contents of your .env file. To do so, go to the GitLab server and navigate to <strong>Settings</strong> &gt; <strong>CI/CD</strong> &gt; <strong>Variables</strong> (click on Expand), and add a new File variable.</p>
<p>Also, I have added <strong>APP_NAME</strong> and <strong>PORT</strong> variables as they are required for me.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657261827234/hi8C3MP9o.png" alt="Selection_354.png" /></p>
<p>In the deploy stage, the image built in the previous stage is deployed: any running container with the same name is removed first, and then a new container is started from the freshly built image.</p>
<p>In this way, you can deploy all the containerized microservices to the machine. Whenever you push changes to the remote repository, a new Docker image is built and run on the machine (where GitLab and the Runner are configured).</p>
<h1 id="heading-configuring-nginx-as-a-load-balancer">Configuring Nginx As a Load Balancer</h1>
<p>Let's say you have successfully run all the microservices on different ports on the server. Then you need to set up Nginx as a load balancer so that incoming requests are routed to the right service.</p>
<p>Since my GitLab server is running on the same host, I need to use a different port to serve the microservices. So, let's start by configuring NGINX on the server.</p>
<h4 id="heading-step-1-install-nginx"><strong>Step-1: Install NGINX</strong></h4>
<pre><code>$ sudo apt <span class="hljs-keyword">update</span>
$ sudo -H apt <span class="hljs-keyword">install</span> nginx-common nginx-<span class="hljs-keyword">full</span>
</code></pre><p>Start Nginx:</p>
<pre><code>$ sudo nginx
</code></pre><p>Or manage it via systemd and check the status:</p>
<pre><code>$ sudo systemctl start nginx
$ sudo systemctl status nginx
</code></pre><p>Output:</p>
<pre><code>● nginx.service <span class="hljs-operator">-</span> A high performance web server and a reverse proxy server
     Loaded: loaded (<span class="hljs-operator">/</span>lib<span class="hljs-operator">/</span>systemd<span class="hljs-operator">/</span>system<span class="hljs-operator">/</span>nginx.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed <span class="hljs-number">2022</span><span class="hljs-operator">-</span>06<span class="hljs-number">-15</span> 09:05:01 <span class="hljs-operator">+</span>0545; 5h 20min ago
       Docs: man:nginx(<span class="hljs-number">8</span>)
    Process: <span class="hljs-number">875</span> ExecStartPre<span class="hljs-operator">=</span><span class="hljs-operator">/</span>usr<span class="hljs-operator">/</span>sbin<span class="hljs-operator">/</span>nginx <span class="hljs-operator">-</span>t <span class="hljs-operator">-</span>q <span class="hljs-operator">-</span>g daemon on; master_process on; (code<span class="hljs-operator">=</span>exited, stat<span class="hljs-operator">&gt;</span>
    Process: <span class="hljs-number">1065</span> ExecStart<span class="hljs-operator">=</span><span class="hljs-operator">/</span>usr<span class="hljs-operator">/</span>sbin<span class="hljs-operator">/</span>nginx <span class="hljs-operator">-</span>g daemon on; master_process on; (code<span class="hljs-operator">=</span>exited, status<span class="hljs-operator">=</span><span class="hljs-number">0</span><span class="hljs-operator">/</span>SUC<span class="hljs-operator">&gt;</span>
   Main PID: <span class="hljs-number">1066</span> (nginx)
      Tasks: <span class="hljs-number">5</span> (limit: <span class="hljs-number">9408</span>)
     Memory: <span class="hljs-number">4</span>.7M
     CGroup: <span class="hljs-operator">/</span>system.slice/nginx.service
             ├─<span class="hljs-number">1066</span> nginx: master process <span class="hljs-operator">/</span>usr<span class="hljs-operator">/</span>sbin<span class="hljs-operator">/</span>nginx <span class="hljs-operator">-</span>g daemon on; master_process on;
             ├─<span class="hljs-number">1067</span> nginx: worker process
             ├─<span class="hljs-number">1068</span> nginx: worker process
             ├─<span class="hljs-number">1069</span> nginx: worker process
             └─<span class="hljs-number">1070</span> nginx: worker process
</code></pre><p>You can start, stop, and restart Nginx with:</p>
<pre><code>$ sudo systemctl start nginx
$ sudo systemctl stop nginx
$ sudo systemctl restart nginx
</code></pre><h4 id="heading-step-2-configure-nginx-load-balancer-for-dockerized-apps"><strong>Step-2: Configure Nginx Load Balancer for Dockerized Apps</strong></h4>
<p>To configure Nginx as a load balancer, first remove the default config file <strong><em>/etc/nginx/sites-enabled/default</em></strong>, then create <strong><em>/etc/nginx/conf.d/lb.conf</em></strong> with the following content (adjust it to your requirements).</p>
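<p>The corresponding commands (use whichever editor you prefer):</p>
<pre><code>$ sudo rm /etc/nginx/sites-enabled/default
$ sudo nano /etc/nginx/conf.d/lb.conf
</code></pre>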
<p>I'll be routing all the microservices, which run on different ports, through <strong><em>http://10.10.5.55:9999</em></strong>, since <strong><em>http://10.10.5.55</em></strong> (port 80) is already used by the GitLab server.</p>
<p>The requests for different services will be load balanced as:</p>
<pre><code><span class="hljs-attribute">upstream</span> user {
    <span class="hljs-attribute">server</span> <span class="hljs-number">127.0.0.1:5001</span>;
}
<span class="hljs-attribute">upstream</span> content {
    <span class="hljs-attribute">server</span> <span class="hljs-number">127.0.0.1:5002</span>;
}
<span class="hljs-attribute">upstream</span> discussion {
    <span class="hljs-attribute">server</span> <span class="hljs-number">127.0.0.1:5003</span>;
}
<span class="hljs-attribute">upstream</span> payment {
    <span class="hljs-attribute">server</span> <span class="hljs-number">127.0.0.1:5004</span>;
}
<span class="hljs-attribute">upstream</span> read {
    <span class="hljs-attribute">server</span> <span class="hljs-number">127.0.0.1:5007</span>;
}
<span class="hljs-attribute">upstream</span> subscription {
    <span class="hljs-attribute">server</span> <span class="hljs-number">127.0.0.1:5005</span>;
}
<span class="hljs-attribute">upstream</span> card {
    <span class="hljs-attribute">server</span> <span class="hljs-number">127.0.0.1:5006</span>;
}
<span class="hljs-section">server</span> {
    <span class="hljs-attribute">listen</span> <span class="hljs-number">9999</span>;
    <span class="hljs-attribute">server_name</span> localhost;
    <span class="hljs-attribute">location</span> /api/user {
        <span class="hljs-attribute">proxy_pass</span> http://user/api/user;
    }
    <span class="hljs-attribute">location</span> /api/content {
        <span class="hljs-attribute">proxy_pass</span> http://content/api/content;
    }
    <span class="hljs-attribute">location</span> /api/discussion {
        <span class="hljs-attribute">proxy_pass</span> http://discussion/api/discussion;
    }
    <span class="hljs-attribute">location</span> /api/payment {
        <span class="hljs-attribute">proxy_pass</span> http://payment/api/payment;
    }
    <span class="hljs-attribute">location</span> /api/read {
        <span class="hljs-attribute">proxy_pass</span> http://read/api/read;
    }
    <span class="hljs-attribute">location</span> /api/subscription {
        <span class="hljs-attribute">proxy_pass</span> http://subscription/api/subscription;
    }
    <span class="hljs-attribute">location</span> /api/card {
        <span class="hljs-attribute">proxy_pass</span> http://card/api/card;
    }
}
</code></pre><p>Here, every service has a different request URI, and all will be routed accordingly.</p>
<p>server_name is your domain name (if you have one configured); for me it's <strong><em>localhost</em></strong>.</p>
<p>And test the syntax:</p>
<pre><code>$ sudo nginx -t
</code></pre><p>Output:</p>
<pre><code>nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
</code></pre><p>Now, restart the Nginx service:</p><pre><code>$ sudo service nginx restart
</code></pre><p>Open http://10.10.5.55:9999/ in the browser and you should get a successful response. (Replace 10.10.5.55 with your server's IP or domain name.)</p>
<p>In my case:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657261916751/cxQHhQTK9.png" alt="Selection_355.png" /></p>
<p>And that's all for the Nginx load balancer (reverse proxy) configuration.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>There you go: you have successfully configured a <strong>GitLab server, GitLab Runner, CI/CD for dockerized microservices, and NGINX as a load balancer (reverse proxy)</strong>, all on a single machine.</p>
<p>Explore more blogs of mine: <a target="_blank" href="https://scanskill.com/profile/sagar">https://scanskill.com/profile/sagar/</a></p>
<p>Thank you!</p>
]]></content:encoded></item><item><title><![CDATA[Django, Postgres, Gunicorn, Nginx with Docker (Part-2)]]></title><description><![CDATA[Continued...  part-1
Gunicorn
﻿Now, install Gunicorn. It's production grade WSGI server.
For now, since we want to use default django's built-in server, create production compose file:
version: '3.5'

services:
    app:
        build:
            con...]]></description><link>https://blog.budhathokisagar.com.np/django-postgres-gunicorn-nginx-with-docker-part-2</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/django-postgres-gunicorn-nginx-with-docker-part-2</guid><category><![CDATA[Django]]></category><category><![CDATA[Docker]]></category><category><![CDATA[nginx]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Sat, 25 Dec 2021 16:23:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1657263005867/HigcYRqrV.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Continued...  <a target="_blank" href="https://blog.budhathokisagar.com.np/django-with-docker-postgres-gunicorn-and-nginxpart-1">part-1</a></p>
<h2 id="heading-gunicorn">Gunicorn</h2>
<p>Now, install Gunicorn. It's a production-grade WSGI HTTP server.</p>
<p>Django's built-in server is fine for development, but in production we want Gunicorn instead. Create a production compose file:</p>
<pre><code><span class="hljs-attribute">version</span>: '3.5'

<span class="solidity">services:
    app:
        build:
            context: .
        command: gunicorn personal.wsgi:application <span class="hljs-operator">-</span><span class="hljs-operator">-</span>bind <span class="hljs-number">0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>
        volumes:
            <span class="hljs-operator">-</span> static_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>static
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"8000:8000"</span>
        restart: always
        env_file:
            <span class="hljs-operator">-</span> .env.prod
        depends_on:
            <span class="hljs-operator">-</span> app<span class="hljs-operator">-</span>db

    app<span class="hljs-operator">-</span>db:
        image: postgres:<span class="hljs-number">12</span><span class="hljs-operator">-</span>alpine
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"5432:5432"</span>
        restart: always
        volumes:
            <span class="hljs-operator">-</span> postgres_data:<span class="hljs-operator">/</span><span class="hljs-keyword">var</span><span class="hljs-operator">/</span>lib<span class="hljs-operator">/</span>postgresql<span class="hljs-operator">/</span>data:rw
        env_file:
            <span class="hljs-operator">-</span> .env.prod
volumes:
    static_data:
    postgres_data:</span>
</code></pre><p>Here, we're running the gunicorn command instead of Django's runserver command. Next, let's create a .env.prod file for the environment variables:</p>
<pre><code>DEBUG<span class="hljs-operator">=</span><span class="hljs-number">0</span>
DJANGO_ALLOWED_HOSTS<span class="hljs-operator">=</span>localhost <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span> [::<span class="hljs-number">1</span>]
DB_ENGINE<span class="hljs-operator">=</span>django.db.backends.postgresql_psycopg2
POSTGRES_HOST_AUTH_METHOD<span class="hljs-operator">=</span>trust
POSTGRES_USER<span class="hljs-operator">=</span>sagar
POSTGRES_PASSWORD<span class="hljs-operator">=</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>
POSTGRES_DB<span class="hljs-operator">=</span>portfolio_db_prod
POSTGRES_HOST<span class="hljs-operator">=</span>app<span class="hljs-operator">-</span>db
POSTGRES_PORT<span class="hljs-operator">=</span><span class="hljs-number">5432</span>
</code></pre><p>Add both files to .gitignore if you want to keep them out of version control. Now, bring down the containers with the -v flag, which also removes the associated volumes:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose down <span class="hljs-operator">-</span>v
</code></pre><p>Then, re-build images and run the containers:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml up <span class="hljs-operator">-</span><span class="hljs-operator">-</span>build
</code></pre><p>Run with the -d flag if you want the services in the background. If anything fails on startup, check the logs with:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml logs <span class="hljs-operator">-</span>f
</code></pre><p>Now, let's create a production Dockerfile as Dockerfile.prod, together with a production entrypoint.prod.sh file inside the scripts directory at the project root. The entrypoint.prod.sh script:</p>
<pre><code><span class="hljs-meta">#!/bin/sh</span>

<span class="hljs-keyword">if</span> [ <span class="hljs-string">"<span class="hljs-variable">$DATABASE</span>"</span> = <span class="hljs-string">"postgres"</span> ]
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Waiting for postgres..."</span>
    <span class="hljs-keyword">while</span> ! nc -z <span class="hljs-string">"<span class="hljs-variable">$POSTGRES_HOST</span>"</span> <span class="hljs-string">"<span class="hljs-variable">$POSTGRES_PORT</span>"</span>; <span class="hljs-keyword">do</span>
      sleep 0.1
    <span class="hljs-keyword">done</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"PostgreSQL started"</span>
<span class="hljs-keyword">fi</span>

<span class="hljs-built_in">exec</span> <span class="hljs-string">"<span class="hljs-variable">$@</span>"</span>
</code></pre><p>Dockerfile.prod file with scripts permission:</p>
<pre><code><span class="hljs-keyword">FROM</span> python:<span class="hljs-number">3.8</span><span class="hljs-number">.9</span>-alpine <span class="hljs-keyword">as</span> builder

ENV PYTHONDONTWRITEBYTECODE <span class="hljs-number">1</span>
ENV PYTHONNUNBUFFERED <span class="hljs-number">1</span>

RUN apk <span class="hljs-keyword">update</span>
RUN apk <span class="hljs-keyword">add</span> postgresql-dev gcc python3-dev musl-dev libc-dev linux-headers

RUN apk <span class="hljs-keyword">add</span> jpeg-dev zlib-dev libjpeg

RUN pip install <span class="hljs-comment">--upgrade pip</span>
<span class="hljs-keyword">COPY</span> ./requirements.txt .
RUN pip wheel <span class="hljs-comment">--no-cache-dir --no-deps --wheel-dir /wheels -r requirements.txt</span>

#### FINAL ####

<span class="hljs-keyword">FROM</span> python:<span class="hljs-number">3.8</span><span class="hljs-number">.9</span>-alpine

RUN mkdir /app
<span class="hljs-keyword">COPY</span> . /app
WORKDIR /app

RUN apk <span class="hljs-keyword">update</span> &amp;&amp; apk <span class="hljs-keyword">add</span> libpq
<span class="hljs-keyword">COPY</span> <span class="hljs-comment">--from=builder ./wheels /wheels</span>
<span class="hljs-keyword">COPY</span> <span class="hljs-comment">--from=builder ./requirements.txt .</span>
RUN pip install <span class="hljs-comment">--no-cache /wheels/*</span>
<span class="hljs-meta">#RUN pip install -r requirements.txt</span>

<span class="hljs-keyword">COPY</span> ./scripts /scripts
RUN chmod +x /scripts/

RUN mkdir -p /vol/media
RUN mkdir -p /vol/static
RUN chmod -R <span class="hljs-number">755</span> /vol

ENTRYPOINT ["/scripts/entrypoint.prod.sh"]
</code></pre><p>Here we used a multi-stage build, since it reduces the final image size. 'builder' is a temporary image used just to build the Python wheels with their dependencies; the wheels are then copied into the final stage. It's also best practice to create a non-root user and run the app as it, to limit the damage an attacker can do.</p>
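<p>A minimal sketch of that hardening step for the end of the final stage (the user name appuser is illustrative):</p>
<pre><code># create an unprivileged user and hand over the app and volume directories
RUN adduser -D appuser &amp;&amp; chown -R appuser:appuser /app /vol
USER appuser
</code></pre><p>Now, update the production compose file to use the production Dockerfile:</p>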
<pre><code><span class="hljs-attribute">version</span>: '3.5'

<span class="solidity">services:
    app:
        build:
            context: .
            dockerfile: Dockerfile.prod
        command: gunicorn personal.wsgi:application <span class="hljs-operator">-</span><span class="hljs-operator">-</span>bind <span class="hljs-number">0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>
        volumes:
            <span class="hljs-operator">-</span> static_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>static
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"8000:8000"</span>
        restart: always
        env_file:
            <span class="hljs-operator">-</span> .env.prod
        depends_on:
            <span class="hljs-operator">-</span> app<span class="hljs-operator">-</span>db

    app<span class="hljs-operator">-</span>db:
        image: postgres:<span class="hljs-number">12</span><span class="hljs-operator">-</span>alpine
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"5432:5432"</span>
        restart: always
        volumes:
            <span class="hljs-operator">-</span> postgres_data:<span class="hljs-operator">/</span><span class="hljs-keyword">var</span><span class="hljs-operator">/</span>lib<span class="hljs-operator">/</span>postgresql<span class="hljs-operator">/</span>data:rw
        env_file:
            <span class="hljs-operator">-</span> .env.prod
volumes:
    static_data:
    postgres_data:</span>
</code></pre><p>Rebuild, and run:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml down <span class="hljs-operator">-</span>v
$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml up <span class="hljs-operator">-</span>d <span class="hljs-operator">-</span><span class="hljs-operator">-</span>build
$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml exec app python manage.py migrate <span class="hljs-operator">-</span><span class="hljs-operator">-</span>noinput
</code></pre><h2 id="heading-ngnix">Ngnix</h2>
<p>Nginx gives you a great deal of control here; you can shape the traffic however you want. Let's add Nginx to act as a reverse proxy for Gunicorn. Add a service to the production docker-compose file:</p>
<pre><code><span class="hljs-attribute">version</span>: '3.5'

<span class="solidity">services:
    app:
        build:
            context: .
            dockerfile: Dockerfile.prod
        command: gunicorn personal.wsgi:application <span class="hljs-operator">-</span><span class="hljs-operator">-</span>bind <span class="hljs-number">0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>
        volumes:
            <span class="hljs-operator">-</span> static_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>static
            <span class="hljs-operator">-</span> media_data: <span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>media
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"8000:8000"</span>
        restart: always
        env_file:
            <span class="hljs-operator">-</span> .env.prod
        depends_on:
            <span class="hljs-operator">-</span> app<span class="hljs-operator">-</span>db

    app<span class="hljs-operator">-</span>db:
        image: postgres:<span class="hljs-number">12</span><span class="hljs-operator">-</span>alpine
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"5432:5432"</span>
        restart: always
        volumes:
            <span class="hljs-operator">-</span> postgres_data:<span class="hljs-operator">/</span><span class="hljs-keyword">var</span><span class="hljs-operator">/</span>lib<span class="hljs-operator">/</span>postgresql<span class="hljs-operator">/</span>data:rw
        env_file:
            <span class="hljs-operator">-</span> .env.prod

    proxy:
        build: ./proxy
        volumes:
            <span class="hljs-operator">-</span> static_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>static
            <span class="hljs-operator">-</span> media_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>media
        restart: always
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"8008:80"</span>
        depends_on:
            <span class="hljs-operator">-</span> app
volumes:
    static_data:
    media_data:
    postgres_data:</span>
</code></pre><p>Inside the root directory, create a proxy directory (name it whatever you like) and add a configuration file; in my case I created default.conf as:</p>
<pre><code>server {
    listen <span class="hljs-number">80</span>;

    location <span class="hljs-operator">/</span>static {
        alias <span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>static;
    }

    location <span class="hljs-operator">/</span>media {
        alias <span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>media;
    }

    location <span class="hljs-operator">/</span> {
        uwsgi_pass app:<span class="hljs-number">8000</span>;
        include <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>nginx<span class="hljs-operator">/</span>uwsgi_params;
    }
}
</code></pre><p>And create the uwsgi_params file for this. (One caveat: Gunicorn speaks plain HTTP rather than the uwsgi binary protocol, so if uwsgi_pass doesn't work in your setup, replacing it with proxy_pass http://app:8000; is the usual alternative.)</p>
<pre><code><span class="hljs-attribute">uwsgi_param</span> QUERY_STRING <span class="hljs-variable">$query_string</span>;
<span class="hljs-attribute">uwsgi_param</span> REQUEST_METHOD <span class="hljs-variable">$request_method</span>;
<span class="hljs-attribute">uwsgi_param</span> CONTENT_TYPE <span class="hljs-variable">$content_type</span>;
<span class="hljs-attribute">uwsgi_param</span> CONTENT_LENGTH <span class="hljs-variable">$content_length</span>;
<span class="hljs-attribute">uwsgi_param</span> REQUEST_URI <span class="hljs-variable">$request_uri</span>;
<span class="hljs-attribute">uwsgi_param</span> PATH_INFO <span class="hljs-variable">$document_uri</span>;
<span class="hljs-attribute">uwsgi_param</span> DOCUMENT_ROOT <span class="hljs-variable">$document_root</span>;
<span class="hljs-attribute">uwsgi_param</span> SERVER_PROTOCOL <span class="hljs-variable">$server_protocol</span>;
<span class="hljs-attribute">uwsgi_param</span> REMOTE_ADDR <span class="hljs-variable">$remote_addr</span>;
<span class="hljs-attribute">uwsgi_param</span> REMOTE_PORT <span class="hljs-variable">$remote_port</span>;
<span class="hljs-attribute">uwsgi_param</span> SERVER_ADDR <span class="hljs-variable">$server_addr</span>;
<span class="hljs-attribute">uwsgi_param</span> SERVER_PORT <span class="hljs-variable">$server_port</span>;
<span class="hljs-attribute">uwsgi_param</span> SERVER_NAME <span class="hljs-variable">$server_name</span>;
</code></pre><p>Also, add a Dockerfile inside the proxy directory for Nginx configuration:</p>
<pre><code>FROM nginxinc<span class="hljs-operator">/</span>nginx<span class="hljs-operator">-</span>unprivileged:<span class="hljs-number">1</span><span class="hljs-operator">-</span>alpine

COPY ./default.conf <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>nginx<span class="hljs-operator">/</span>conf.d/default.conf
COPY uwsgi_params <span class="hljs-operator">/</span>etc<span class="hljs-operator">/</span>nginx<span class="hljs-operator">/</span>uwsgi_params
</code></pre><p>Since Nginx reaches the app over Docker's internal network, you can use expose instead of ports for the app service in docker-compose.prod.yml:</p>
<pre><code>app:
        build:
            context: .
            dockerfile: Dockerfile.prod
        command: gunicorn personal.wsgi:application <span class="hljs-operator">-</span><span class="hljs-operator">-</span>bind <span class="hljs-number">0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>
        volumes:
            <span class="hljs-operator">-</span> static_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>static
            <span class="hljs-operator">-</span> media_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>media
        expose:
            <span class="hljs-operator">-</span> <span class="hljs-number">8000</span>
        restart: always
        env_file:
            <span class="hljs-operator">-</span> .env.prod
        depends_on:
            <span class="hljs-operator">-</span> app<span class="hljs-operator">-</span>db
</code></pre><p>Again, re-build run and try:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml down <span class="hljs-operator">-</span>v
$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml up <span class="hljs-operator">-</span>d <span class="hljs-operator">-</span><span class="hljs-operator">-</span>build
$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml exec web python manage.py migrate <span class="hljs-operator">-</span><span class="hljs-operator">-</span>noinput
$ docker<span class="hljs-operator">-</span>compose <span class="hljs-operator">-</span>f docker<span class="hljs-operator">-</span>compose.prod.yml exec web python manage.py collectstatic <span class="hljs-operator">-</span><span class="hljs-operator">-</span>no<span class="hljs-operator">-</span>input <span class="hljs-operator">-</span><span class="hljs-operator">-</span>clear
</code></pre><p>Ensure the app is running at http://localhost:8008.</p>
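<p>A quick check from the terminal (the static file path is illustrative; any file gathered by collectstatic will do):</p>
<pre><code>$ curl -I http://localhost:8008/
$ curl -I http://localhost:8008/static/admin/css/base.css
</code></pre>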
<p>That's it.</p>
<p>Thank You!</p>
<p>Previous:  <a target="_blank" href="https://blog.budhathokisagar.com.np/django-with-docker-postgres-gunicorn-and-nginxpart-1">Part-1</a> </p>
]]></content:encoded></item><item><title><![CDATA[Django with Docker, Postgres, Gunicorn, and Nginx(Part-1)]]></title><description><![CDATA[In this, we'll be deploying a Django application with docker, postgres, gunicorn and nginx configurations.
﻿Prerequisites
﻿First, ensure the following is installed on your machine:

Python 3.7 or higher(I've used python 3.8.9)
Python pip
Git and a Gi...]]></description><link>https://blog.budhathokisagar.com.np/django-with-docker-postgres-gunicorn-and-nginxpart-1</link><guid isPermaLink="true">https://blog.budhathokisagar.com.np/django-with-docker-postgres-gunicorn-and-nginxpart-1</guid><category><![CDATA[Python]]></category><category><![CDATA[Django]]></category><category><![CDATA[Docker]]></category><category><![CDATA[guide]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Sagar Budhathoki]]></dc:creator><pubDate>Sat, 25 Dec 2021 14:40:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1657263106527/bL7CAaBlP.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this, we'll be deploying a Django application with docker, postgres, gunicorn and nginx configurations.</p>
<h2 id="heading-prerequisites">﻿Prerequisites</h2>
<p>﻿First, ensure the following is installed on your machine:</p>
<ul>
<li>Python 3.7 or higher (I've used Python 3.8.9)</li>
<li>Python pip</li>
<li>Git and a GitHub account</li>
<li>Docker and docker-compose</li>
</ul>
<p>Let's jump directly into dockerizing the Django web application. I'll assume you already have a Django project set up on your system.</p>
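<p>If you don't, here's a minimal way to scaffold one to follow along with (the project name personal matches the Gunicorn command used in Part-2; substitute your own):</p>
<pre><code>$ python -m venv venv &amp;&amp; . venv/bin/activate
$ pip install django
$ django-admin startproject personal .
$ pip freeze &gt; requirements.txt
</code></pre>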
<h2 id="heading-docker">Docker</h2>
<p>After installation of docker, add a Dockerfile to the root directory of your project:</p>
<pre><code>FROM python:3.8.9-alpine

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN pip install --upgrade pip
COPY ./requirements.txt .

RUN pip install -r requirements.txt

COPY . .
</code></pre><p>Here, we used an alpine-based Docker image for Python 3.8.9. Then we set two environment variables:</p>
<ul>
<li>PYTHONDONTWRITEBYTECODE (which prevents writing pyc files)</li>
<li>PYTHONUNBUFFERED (which prevents buffering stdout and stderr)</li>
</ul>
<p>Next, we upgraded pip, copied the requirements.txt file into the image, and installed the requirements. Finally, we copied the project itself into the working directory (/app).</p>
<p>Now, create a docker-compose.yml file in the project root and add services:</p>
<pre><code><span class="hljs-attribute">version</span>: '3.5'

<span class="solidity">services:
    app:
        build: .
        command: python manage.py runserver <span class="hljs-number">0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span>:<span class="hljs-number">8000</span>
        volumes:
            <span class="hljs-operator">-</span> static_data:<span class="hljs-operator">/</span>vol<span class="hljs-operator">/</span>web
        ports:
            <span class="hljs-operator">-</span> <span class="hljs-string">"8000:8000"</span>
        restart: always
        env_file:
            <span class="hljs-operator">-</span> ./.env</span>
</code></pre><p>Create a .env file at the root (the same directory containing docker-compose.yml) and edit it as:</p>
<pre><code><span class="hljs-attr">DEBUG</span>=<span class="hljs-number">1</span>
<span class="hljs-attr">SECRET_KEY</span>=foo
<span class="hljs-attr">DJANGO_ALLOWED_HOSTS</span>=localhost <span class="hljs-number">127.0</span>.<span class="hljs-number">0.1</span> [::]
</code></pre><p>Update the DEBUG and ALLOWED_HOSTS variables in settings.py:</p>
<pre><code>DEBUG <span class="hljs-operator">=</span> <span class="hljs-keyword">int</span>(os.environ.get(<span class="hljs-string">"DEBUG"</span>, default<span class="hljs-operator">=</span><span class="hljs-number">0</span>))
ALLOWED_HOSTS <span class="hljs-operator">=</span> os.environ.get(<span class="hljs-string">"DJANGO_ALLOWED_HOSTS"</span>).split(<span class="hljs-string">" "</span>)
</code></pre><p><em>'DJANGO_ALLOWED_HOSTS' should be a single string of hosts with a space between each.
For example: 'DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]'</em></p>
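<p>To see what that split produces, here's a quick one-liner:</p>
<pre><code>$ DJANGO_ALLOWED_HOSTS="localhost 127.0.0.1 [::1]" \
  python -c 'import os; print(os.environ["DJANGO_ALLOWED_HOSTS"].split(" "))'
['localhost', '127.0.0.1', '[::1]']
</code></pre>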
<p>In the docker-compose file, build: . means the image is built from the Dockerfile in the project root that we created earlier.</p>
<p>Now, build the image:</p>
<pre><code>$ docker-compose build
</code></pre><p>Use sudo if needed.
Run the container once the image is built:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose up <span class="hljs-operator">-</span>d
</code></pre><h2 id="heading-postgres">Postgres</h2>
<p>Add a new service to the docker-compose.yml file, and update the Django database settings to use <a target="_blank" href="http://initd.org/psycopg/">Psycopg2</a>. Let's add a new service named app-db:</p>
<pre><code><span class="hljs-attribute">version</span>: '3.5'

<span class="yaml"><span class="hljs-attr">services:</span>
    <span class="hljs-attr">app:</span>
        <span class="hljs-attr">build:</span> <span class="hljs-string">.</span>
        <span class="hljs-attr">command:</span> <span class="hljs-string">python</span> <span class="hljs-string">manage.py</span> <span class="hljs-string">runserver</span> <span class="hljs-number">0.0</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">:8000</span>
        <span class="hljs-attr">volumes:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">static_data:/vol/web</span>
        <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">"8000:8000"</span>
        <span class="hljs-attr">restart:</span> <span class="hljs-string">always</span>
        <span class="hljs-attr">env_file:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">./.env</span>
        <span class="hljs-attr">depends_on:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">app-db</span>

    <span class="hljs-attr">app-db:</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">postgres:12-alpine</span>
        <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">"5432:5432"</span>
        <span class="hljs-attr">restart:</span> <span class="hljs-string">always</span>
        <span class="hljs-attr">volumes:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">postgres_data:/var/lib/postgresql/data:rw</span>
        <span class="hljs-attr">env_file:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">.env</span>
<span class="hljs-comment"># you can also use environmental variables directly as following:</span>
<span class="hljs-comment">#(remember variables for postgres should be named exactly as given below)</span>
<span class="hljs-comment">#        environment:</span>
<span class="hljs-comment">#            - POSTGRES_HOST_AUTH_METHOD=trust</span>
<span class="hljs-comment">#            - POSTGRES_USER:sagar</span>
<span class="hljs-comment">#            - POSTGRES_PASSWORD:********</span>
<span class="hljs-comment">#            - POSTGRES_DB:portfolio_db</span>
<span class="hljs-comment">#            - TZ:Asia/Kathmandu</span></span>
</code></pre><p>We will just use the official Postgres Docker image; postgres_data is the persistent data volume managed by Docker. Update the .env file with the Postgres settings:</p>
<pre><code>DEBUG<span class="hljs-operator">=</span><span class="hljs-number">1</span>
DJANGO_ALLOWED_HOSTS<span class="hljs-operator">=</span>localhost <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span> [::<span class="hljs-number">1</span>]
POSTGRES_HOST_AUTH_METHOD<span class="hljs-operator">=</span>trust
POSTGRES_USER<span class="hljs-operator">=</span>sagar
POSTGRES_PASSWORD<span class="hljs-operator">=</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span><span class="hljs-operator">*</span>
POSTGRES_DB<span class="hljs-operator">=</span>portfolio_db
POSTGRES_HOST<span class="hljs-operator">=</span>app<span class="hljs-operator">-</span>db
POSTGRES_PORT<span class="hljs-operator">=</span><span class="hljs-number">5432</span>
</code></pre><p>Update the DATABASES dict in settings.py:</p>
<pre><code>DATABASES <span class="hljs-operator">=</span> {
    <span class="hljs-string">'default'</span>: {
        <span class="hljs-string">'ENGINE'</span>: os.environ.get(<span class="hljs-string">"DB_ENGINE"</span>, <span class="hljs-string">"django.db.backends.sqlite3"</span>),
        <span class="hljs-string">'NAME'</span>: os.environ.get(<span class="hljs-string">"POSTGRES_DB"</span>, os.path.join(BASE_DIR, <span class="hljs-string">"db.sqlite3"</span>)),
        <span class="hljs-string">'USER'</span>: os.environ.get(<span class="hljs-string">"POSTGRES_USER"</span>, <span class="hljs-string">"default_user"</span>),
        <span class="hljs-string">'PASSWORD'</span>: os.environ.get(<span class="hljs-string">"POSTGRES_PASSWORD"</span>, <span class="hljs-string">"default_password"</span>),
        <span class="hljs-string">'HOST'</span>: os.environ.get(<span class="hljs-string">"POSTGRES_HOST"</span>, <span class="hljs-string">"localhost"</span>),
        <span class="hljs-string">'PORT'</span>: os.environ.get(<span class="hljs-string">"POSTGRES_PORT"</span>, <span class="hljs-string">"5432"</span>),

    }
}
</code></pre><p>Here, the database is configured based on the environment variables that we just defined. Take note of the default values. Update the Dockerfile to install the appropriate packages required for Psycopg2:</p>
<pre><code><span class="hljs-keyword">From</span> python:<span class="hljs-number">3.8</span><span class="hljs-number">.9</span>-alpine

WORKDIR /app

PYTHONDONTWRITEBYTECODE <span class="hljs-number">1</span>
ENV PYTHONNUNBUFFERED <span class="hljs-number">1</span>

<span class="hljs-meta">#psycopg2 dependencies installation</span>
RUN apk <span class="hljs-keyword">update</span>
RUN apk <span class="hljs-keyword">add</span> postgresql-dev gcc python3-dev musl-dev libc-dev linux-headers

RUN pip install <span class="hljs-comment">--upgrade pip</span>
<span class="hljs-keyword">COPY</span> ./requirements.txt .

RUN pip install -r requirements.txt

<span class="hljs-keyword">COPY</span> . .
</code></pre><p>Add Psycopg2 to requirements.txt. Make sure that every time you install packages, they are added to the requirements.txt file (pip freeze &gt; requirements.txt).</p>
<p>Build the new image with two services:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose up <span class="hljs-operator">-</span>d <span class="hljs-operator">-</span><span class="hljs-operator">-</span>build
</code></pre><p>Then run the migrations:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose exec app python manage.py migrate <span class="hljs-operator">-</span><span class="hljs-operator">-</span>noinput
</code></pre><pre><code>Operations to perform:
  Apply all migrations: admin, auth, blogs, contenttypes, django_summernote, portfolio, sessions, works
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying blogs.0001_initial... OK
  Applying django_summernote.0001_initial... OK
  Applying django_summernote.0002_update-help_text... OK
  Applying portfolio.0001_initial... OK
  Applying sessions.0001_initial... OK
  Applying works.0001_initial... OK
  Applying works.0002_auto_20200325_1330... OK
  Applying works.0003_auto_20200325_1411... OK
  Applying works.0004_auto_20200325_1413... OK
  Applying works.0005_auto_20200325_1417... OK
  Applying works.0006_remove_work_image... OK
  Applying works.0007_work_image... OK
</code></pre><p>If you run into any errors, run docker-compose down -v to remove the volumes along with the containers, then re-build and run the migrations again.</p>
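<p>In full, that cycle is:</p>
<pre><code>$ docker-compose down -v
$ docker-compose up -d --build
$ docker-compose exec app python manage.py migrate --noinput
</code></pre>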
<p>Ensure database tables are created:</p>
<pre><code>$ docker<span class="hljs-operator">-</span>compose exec app<span class="hljs-operator">-</span>db psql <span class="hljs-operator">-</span><span class="hljs-operator">-</span>username<span class="hljs-operator">=</span>user <span class="hljs-operator">-</span><span class="hljs-operator">-</span>dbname<span class="hljs-operator">=</span>portfolio_db
</code></pre><pre><code>$ sudo docker-compose exec app-db psql <span class="hljs-comment">--username=sagar --dbname=portfolio_db</span>
psql (<span class="hljs-number">12.7</span>)
<span class="hljs-keyword">Type</span> "help" <span class="hljs-keyword">for</span> help.

portfolio_db=# \c portfolio_db
You are now connected <span class="hljs-keyword">to</span> <span class="hljs-keyword">database</span> "portfolio_db" <span class="hljs-keyword">as</span> <span class="hljs-keyword">user</span> "sagar".
portfolio_db=# \l
                               List <span class="hljs-keyword">of</span> databases
     <span class="hljs-type">Name</span>     | <span class="hljs-keyword">Owner</span> | <span class="hljs-keyword">Encoding</span> |  <span class="hljs-keyword">Collate</span>   |   Ctype    | <span class="hljs-keyword">Access</span> <span class="hljs-keyword">privileges</span> 
<span class="hljs-comment">--------------+-------+----------+------------+------------+-------------------</span>
 portfolio_db | sagar | UTF8     | en_US.utf8 | en_US.utf8 | 
 postgres     | sagar | UTF8     | en_US.utf8 | en_US.utf8 | 
 template0    | sagar | UTF8     | en_US.utf8 | en_US.utf8 | =c/sagar         +
              |       |          |            |            | sagar=CTc/sagar
 template1    | sagar | UTF8     | en_US.utf8 | en_US.utf8 | =c/sagar         +
              |       |          |            |            | sagar=CTc/sagar
(<span class="hljs-number">4</span> <span class="hljs-keyword">rows</span>)

portfolio_db=# \dt
                   List <span class="hljs-keyword">of</span> relations
 <span class="hljs-keyword">Schema</span> |             <span class="hljs-type">Name</span>             | <span class="hljs-keyword">Type</span>  | <span class="hljs-keyword">Owner</span> 
<span class="hljs-comment">--------+------------------------------+-------+-------</span>
 <span class="hljs-built_in">public</span> | auth_group                   | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | auth_group_permissions       | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | auth_permission              | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | auth_user                    | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | auth_user_groups             | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | auth_user_user_permissions   | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | blogs_category_post          | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | blogs_comment                | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | blogs_post                   | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | blogs_post_categories        | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | django_admin_log             | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | django_content_type          | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | django_migrations            | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | django_session               | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | django_summernote_attachment | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | portfolio_contact            | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | works_category_work          | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | works_work                   | <span class="hljs-keyword">table</span> | sagar
 <span class="hljs-built_in">public</span> | works_work_categories        | <span class="hljs-keyword">table</span> | sagar
(<span class="hljs-number">19</span> <span class="hljs-keyword">rows</span>)

portfolio_db=#
</code></pre><p>Now add an entrypoint.sh script inside the scripts directory:</p>
<pre><code><span class="hljs-meta">#!/bin/sh</span>

<span class="hljs-keyword">if</span> [ <span class="hljs-string">"<span class="hljs-variable">$DATABASE</span>"</span> = <span class="hljs-string">"postgres"</span> ]
<span class="hljs-keyword">then</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"Waiting for postgres..."</span>
    <span class="hljs-keyword">while</span> ! nc -z <span class="hljs-string">"<span class="hljs-variable">$POSTGRES_HOST</span>"</span> <span class="hljs-string">"<span class="hljs-variable">$POSTGRES_PORT</span>"</span>; <span class="hljs-keyword">do</span>
      sleep 0.1
    <span class="hljs-keyword">done</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"PostgreSQL started"</span>
<span class="hljs-keyword">fi</span>

<span class="hljs-comment"># It's okay to run the following two, flush and migrate commands on development mode(when debug mode is on) but not recommended</span>
<span class="hljs-comment"># for production:</span>

<span class="hljs-comment"># python manage.py flush --no-input</span>
<span class="hljs-comment"># python manage.py migrate</span>

<span class="hljs-built_in">exec</span> <span class="hljs-string">"<span class="hljs-variable">$@</span>"</span>
</code></pre><p>Update the Dockerfile with the script file permissions, and also add a DATABASE variable to the .env file.</p>
<pre><code><span class="hljs-attribute">From</span> python:<span class="hljs-number">3</span>.<span class="hljs-number">8</span>.<span class="hljs-number">9</span>-alpine

<span class="hljs-attribute">WORKDIR</span> /app

<span class="hljs-attribute">PYTHONDONTWRITEBYTECODE</span> <span class="hljs-number">1</span>
<span class="hljs-attribute">ENV</span> PYTHONNUNBUFFERED <span class="hljs-number">1</span>

<span class="hljs-comment">#psycopg2 dependencies installation</span>
<span class="hljs-attribute">RUN</span> apk update
<span class="hljs-attribute">RUN</span> apk add postgresql-dev gcc python<span class="hljs-number">3</span>-dev musl-dev libc-dev linux-headers

<span class="hljs-attribute">RUN</span> pip install --upgrade pip
<span class="hljs-attribute">COPY</span> ./requirements.txt .

<span class="hljs-attribute">RUN</span> pip install -r requirements.txt

<span class="hljs-attribute">COPY</span> . .
<span class="hljs-attribute">COPY</span> ./scripts /scripts

<span class="hljs-attribute">RUN</span> chmod +x /scripts/*

<span class="hljs-attribute">RUN</span> mkdir -p /vol/web/media
<span class="hljs-attribute">RUN</span> mkdir -p /vol/web/static

<span class="hljs-attribute">RUN</span> chmod -R <span class="hljs-number">755</span> /vol/web

<span class="hljs-attribute">ENTRYPOINT</span><span class="hljs-meta"> ["/scripts/entrypoint.sh"]</span>
</code></pre><p>Edit .env file:</p>
<pre><code><span class="hljs-attr">DEBUG</span>=<span class="hljs-number">1</span>
<span class="hljs-attr">DJANGO_ALLOWED_HOSTS</span>=localhost <span class="hljs-number">127.0</span>.<span class="hljs-number">0.1</span> [::<span class="hljs-number">1</span>]
<span class="hljs-attr">POSTGRES_HOST_AUTH_METHOD</span>=trust
<span class="hljs-attr">POSTGRES_USER</span>=user
<span class="hljs-attr">POSTGRES_PASSWORD</span>=password
<span class="hljs-attr">POSTGRES_DB</span>=portfolio_db
<span class="hljs-attr">POSTGRES_HOST</span>=app-db <span class="hljs-comment">#from docker-compose</span>
<span class="hljs-attr">POSTGRES_PORT</span>=<span class="hljs-number">5432</span>
<span class="hljs-attr">DATABASE</span>=postgres
</code></pre><p>Now, re-build, run, and try http://localhost:8000/.</p>
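<p>Or check from the terminal:</p>
<pre><code>$ curl -I http://localhost:8000/
</code></pre>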
<p>Next: Django, Postgres, Gunicorn, Nginx with Docker ( <a target="_blank" href="https://blog.budhathokisagar.com.np/django-postgres-gunicorn-nginx-with-docker-part-2">Part-2</a> )</p>
]]></content:encoded></item></channel></rss>