<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Stack & Ship]]></title><description><![CDATA[Each blog discusses a concept that has helped me build better applications. Be it a new framework, a tool or some idea that brought a new way of thinking to bui]]></description><link>https://blog.shreehari.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1769007929557/9e56e8a5-906e-48e9-be31-9288bd76b2bd.png</url><title>Stack &amp; Ship</title><link>https://blog.shreehari.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 11:15:45 GMT</lastBuildDate><atom:link href="https://blog.shreehari.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Disk Space Battle: PNPM vs NPM]]></title><description><![CDATA[npm - Node Package Manager
pnpm - Performant Node Package Manager
The Problem
Every application is created to tackle a problem. Whether it's a small issue or a big one, if it doesn't address a specific problem, then it's hard to understand why it was...]]></description><link>https://blog.shreehari.dev/disk-space-battle-pnpm-vs-npm</link><guid isPermaLink="true">https://blog.shreehari.dev/disk-space-battle-pnpm-vs-npm</guid><category><![CDATA[Node.js]]></category><category><![CDATA[npm]]></category><category><![CDATA[pnpm]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Shreehari Acharya]]></dc:creator><pubDate>Thu, 22 Jan 2026 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769449069168/7da143d9-3ae5-468b-9a7c-020f7e23fbd0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><code>npm - Node Package Manager</code></p>
<p><code>pnpm - Performant Node Package Manager</code></p>
<h2 id="heading-the-problem">The Problem</h2>
<p>Every application is created to tackle a problem. Whether it's a small issue or a big one, if it doesn't address a specific problem, then it's hard to understand why it was made. pnpm was developed to fix a particular issue caused by the old npm. To illustrate this problem, let me share a meme with you.</p>
<p><img src="https://imgs.search.brave.com/qKlywmaOC75uJEIjEgbse0weoLDqIfW96PP82wmMStw/rs:fit:860:0:0:0/g:ce/aHR0cHM6Ly9oYWNr/YWRheS5jb20vd3At/Y29udGVudC91cGxv/YWRzLzIwMjEvMDgv/bm9kZV9tb2R1bGVz/LW1lbWUucG5nP3c9/NDAw" alt="Node_modules folder meme" class="image--center mx-auto" /></p>
<p><code>node_modules</code> contains a huge number of files, so it eats a lot of disk space. Now imagine you're working on several projects, each with its own <code>node_modules</code>. Much of that content is duplicated: if two projects use the same version of React for the frontend, you end up with an identical copy of React inside <code>node_modules</code> in two places, taking up twice the space.</p>
<p>That was just an example; in reality, there may be a lot of packages used in different projects, and every time you run <code>npm install</code>, a network call is made to download all of these again, even though you have one copy in another project. Wouldn’t it be great if we could somehow optimise this? And that’s where pnpm comes in!</p>
<h2 id="heading-the-solution">The Solution</h2>
<p>You might already have an idea of how to solve this problem: instead of downloading a module for every project, why not reuse it? Of course, creating a copy wouldn't work because we would still be losing space. The solution is to place the required module in a centralised location, and all projects that need it use links that point to the main folder containing the module's actual code. This works just like pointers, which store the address and can reference a variable through the address.</p>
<h3 id="heading-how-to-achieve-this">How To Achieve This?</h3>
<p>pnpm achieves this using hard links and symbolic links.</p>
<p>A hard link is an additional "name" for the same data on the disk. In Linux, every file is actually a link to an <code>inode</code> (the data's address). A hard link just creates a second pointer to that same address. If you delete the original file name, the data is not deleted. The data only disappears once all hard links to it are deleted. A hard link cannot point to directories and cannot cross different partitions (because <code>inode</code> numbers are only unique within a single filesystem).</p>
<pre><code class="lang-bash">ln original.txt my_duplicate.txt <span class="hljs-comment"># creating a hard link of original.txt</span>
</code></pre>
<p>Even if you delete <code>original.txt</code>, you can still open <code>my_duplicate.txt</code> and see all your data. They are effectively the same file.</p>
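<p>The same behaviour can be checked from Python with <code>os.link</code>. A minimal sketch (throwaway temporary files, nothing pnpm-specific):</p>
<pre><code class="lang-python">import os
import tempfile

# Work in a throwaway directory so the demo is self-contained.
d = tempfile.mkdtemp()
original = os.path.join(d, "original.txt")
duplicate = os.path.join(d, "my_duplicate.txt")

with open(original, "w") as f:
    f.write("hello from the inode")

os.link(original, duplicate)  # hard link: a second name for the same inode
same_inode = os.stat(original).st_ino == os.stat(duplicate).st_ino

os.remove(original)           # delete the first name...
with open(duplicate) as f:    # ...and the data is still reachable
    data = f.read()

print(same_inode, data)
</code></pre>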
<p>A symbolic link is like a "redirect" sign. It contains the text of a path (e.g., <code>../my-files/photo.jpg</code>). If you delete the original file, the symlink "breaks" because the path it points to is gone. It can point to directories and can cross over to different hard drives/partitions.</p>
<pre><code class="lang-bash">ln -s original.txt my_shortcut.txt
</code></pre>
<p>If you move <code>original.txt</code> to a different folder, <code>my_shortcut.txt</code> will stop working.</p>
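<p>Again, a minimal Python sketch of this behaviour, using <code>os.symlink</code> and then moving the target out from under the link:</p>
<pre><code class="lang-python">import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "original.txt")
shortcut = os.path.join(d, "my_shortcut.txt")

with open(target, "w") as f:
    f.write("some data")

os.symlink(target, shortcut)       # symlink: stores the path, not the data
with open(shortcut) as f:          # reading follows the path to the target
    readable_before = f.read()

moved = os.path.join(d, "moved.txt")
os.rename(target, moved)           # move the target...
broken = not os.path.exists(shortcut)  # ...and the symlink now dangles

print(readable_before, broken)
</code></pre>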
<p>When you run <code>pnpm install express</code>, the following steps take place in order:</p>
<h3 id="heading-step-1-the-global-store-the-source">Step 1: The Global Store (The Source)</h3>
<p>When you request a package (e.g., <code>express</code>), pnpm first checks its <strong>Global Store</strong> (usually located at <code>~/.pnpm-store</code>).</p>
<ul>
<li><p>If the package isn't there, pnpm downloads it once.</p>
</li>
<li><p>This store is "content-addressable," meaning it saves every unique file based on a hash of its content, not its name. <code>express@latest</code> and <code>express@4.1.3</code> may resolve to different versions, so both are stored; however, any files the two versions share byte-for-byte are kept only once.</p>
</li>
</ul>
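<p>To build intuition for what "content-addressable" means, here is a toy sketch, with a plain dictionary standing in for the real store (whose on-disk layout is more involved): files are keyed by a hash of their bytes, so identical content is stored exactly once.</p>
<pre><code class="lang-python">import hashlib

store = {}  # hash of content to content (stand-in for ~/.pnpm-store)

def save(content):
    """Store content under the hash of its bytes and return the key."""
    key = hashlib.sha256(content).hexdigest()
    store[key] = content  # identical content always maps to the same entry
    return key

# Two packages shipping an identical file share a single store entry.
k1 = save(b"module.exports = noop")
k2 = save(b"module.exports = noop")
k3 = save(b"module.exports = other")

print(k1 == k2, len(store))
</code></pre>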
<h3 id="heading-step-2-creating-the-virtual-store-hard-links">Step 2: Creating the Virtual Store (Hard Links)</h3>
<p>Instead of copying <code>express</code> into your project, pnpm creates a hidden folder: <code>node_modules/.pnpm</code>. This is called the <strong>Virtual Store</strong>.</p>
<ul>
<li><p><strong>The Action:</strong> pnpm creates <strong>Hard Links</strong> from the Global Store into this <code>.pnpm</code> folder.</p>
</li>
<li><p><strong>The Result:</strong> The files now "exist" in your project folder, but they take up <strong>zero extra disk space</strong> because they point to the same data on the disk as the Global Store.</p>
</li>
</ul>
<h3 id="heading-step-3-nesting-dependencies-symlinks">Step 3: Nesting Dependencies (Symlinks)</h3>
<p>Packages often have their own dependencies. For example, <code>express</code> needs <code>body-parser</code>.</p>
<ul>
<li><p><strong>The Action:</strong> Inside the <code>.pnpm</code> folder, pnpm creates <strong>Symlinks</strong> to connect packages to their own dependencies.</p>
</li>
<li><p><strong>The Result:</strong> This creates a massive, nested web of symlinks that satisfies exactly what each package needs to "see" to function.</p>
</li>
</ul>
<h3 id="heading-step-4-the-public-nodemodules-symlinks">Step 4: The "Public" node_modules (Symlinks)</h3>
<p>Now that the hidden <code>.pnpm</code> folder is ready, pnpm needs to make the packages you actually asked for visible to your code.</p>
<ul>
<li><p><strong>The Action:</strong> pnpm creates a <strong>Symlink</strong> in the root of your project's <code>node_modules</code> that points to the package buried inside the <code>.pnpm</code> virtual store.</p>
</li>
<li><p><strong>Example:</strong> <code>node_modules/express</code> —&gt; <code>.pnpm/express@4.18.2/node_modules/express</code></p>
</li>
</ul>
<h3 id="heading-step-5-resolution">Step 5: Resolution</h3>
<p>When you write <code>import express from 'express'</code> in your code:</p>
<ol>
<li><p>Node.js looks in <code>node_modules/express</code>.</p>
</li>
<li><p>It follows the <strong>Symlink</strong> into the <code>.pnpm</code> virtual store.</p>
</li>
<li><p>It finds the actual files, which are <strong>hard-linked</strong> to the Global Store.</p>
</li>
<li><p>Your app runs perfectly, while the package data occupies your SSD only once.</p>
</li>
</ol>
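<p>All five steps can be simulated end to end in a few lines of Python. The file names below are made up for illustration; real pnpm paths differ:</p>
<pre><code class="lang-python">import os
import tempfile

root = tempfile.mkdtemp()
store_file = os.path.join(root, "store-file")    # stand-in for the global store
virtual = os.path.join(root, "virtual-express")  # stand-in for node_modules/.pnpm/...
public = os.path.join(root, "express")           # stand-in for node_modules/express

with open(store_file, "w") as f:
    f.write("the actual package code")

os.link(store_file, virtual)  # Step 2: hard link, zero extra data on disk
os.symlink(virtual, public)   # Step 4: public symlink your code resolves

with open(public) as f:                   # follows symlink, then the hard link
    resolved = f.read()
names_on_disk = os.stat(public).st_nlink  # both names point at one inode

print(resolved, names_on_disk)
</code></pre>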
<h2 id="heading-the-problem-of-phantom-dependencies">The Problem of Phantom Dependencies</h2>
<p>In npm, dependencies are "hoisted" or flattened to the root of <code>node_modules</code>. If you install <code>express</code>, and <code>express</code> depends on <code>debug</code>, you could technically <code>import 'debug'</code> in your own code even though you never listed it in your package.json.</p>
<p>This looks like a feature, but it can cause subtle bugs. If a future version of <code>express</code> drops <code>debug</code> in favour of some other package like <code>better-debug</code>, your code suddenly fails with a <code>module not found</code> error, because you never explicitly installed <code>debug</code> yourself.</p>
<p>Because pnpm uses symlinks to only expose what is explicitly in your <code>package.json</code>, your code cannot access packages it doesn't officially depend on. This makes your builds much more predictable and secure.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>These are just two of the many benefits of pnpm. If your project follows a monorepo structure, pnpm helps you track and manage dependencies far better than npm does.</p>
<p>It is significantly faster in automated environments like GitHub Actions or Jenkins. If a package has a "post-install" script (like building some C++ code or generating files), pnpm can cache the result of that script. Next time you install that package (even in a different project), pnpm just links the already-built result instead of running the slow build script again.</p>
<p>Every file in the pnpm store is verified using a checksum. This ensures that the code on your disk exactly matches what was published to npm, protecting you from "disk corruption" or "supply chain" tampering where a local file might have been modified.</p>
<h2 id="heading-thank-you">Thank you,</h2>
<p>X - <a target="_blank" href="https://x.com/06_Shreehari">https://x.com/06_Shreehari</a></p>
<p>LinkedIn - <a target="_blank" href="https://www.linkedin.com/in/shreehari-acharya/">https://www.linkedin.com/in/shreehari-acharya/</a></p>
<p>GitHub - <a target="_blank" href="https://github.com/Shreehari-Acharya">https://github.com/Shreehari-Acharya</a></p>
]]></content:encoded></item><item><title><![CDATA[Introduction To RAG - Retrieval-Augmented Generation]]></title><description><![CDATA[What is RAG ?
LLMs are great at answering questions because they have been trained on a lot of data. But how can we train them on our personal data? Personal data could be an organization's internal knowledge base or all the articles and blogs you ha...]]></description><link>https://blog.shreehari.dev/introduction-to-rag-retrieval-augmented-generation</link><guid isPermaLink="true">https://blog.shreehari.dev/introduction-to-rag-retrieval-augmented-generation</guid><category><![CDATA[RAG ]]></category><category><![CDATA[Retrieval-Augmented Generation]]></category><dc:creator><![CDATA[Shreehari Acharya]]></dc:creator><pubDate>Thu, 01 May 2025 14:31:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746109701099/dc914c92-039e-4e79-9959-390f44b7b8a9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-rag">What is RAG ?</h1>
<p>LLMs are great at answering questions because they have been trained on a lot of data. But how can we train them on our personal data? Personal data could be an organization's internal knowledge base or all the articles and blogs you have written. Injecting this personal data into Large Language Models (LLMs) and then getting answers back is what we can call RAG in the simplest terms.</p>
<h1 id="heading-why-rag">Why RAG ?</h1>
<blockquote>
<p>Can't we just paste our personal data into the prompt if we need answers about our data?</p>
</blockquote>
<p>Yes, you could, and that's an efficient method when your data is small and fits within the context window. (The context window refers to the maximum size of a prompt that can be sent.)</p>
<p>However, PDFs are often around 50-100 pages with lots of text. Can you paste all of that into the prompt? That would be very inefficient.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746103271474/54a1c703-0ac8-4491-aa73-d8f512e9075a.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>But I always got an answer when I uploaded a really big document. How did that happen?</p>
</blockquote>
<p>Yes, uploading a document works because it uses RAG behind the scenes. However, pasting the contents directly into the prompt would be inefficient.</p>
<h1 id="heading-how-does-rag-work">How does RAG work ?</h1>
<p>Let's now understand how RAG works, step by step.</p>
<h2 id="heading-step-1-chunking">Step 1 - Chunking</h2>
<blockquote>
<p>Chunking refers to the process of splitting large amounts of data into smaller portions</p>
</blockquote>
<p>Deciding how to chunk a document is itself a nuanced problem. There is no universal chunk size; it's up to developers to experiment with different methods and determine how their data should be split.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746102844295/1f4af688-9b12-45e9-b589-14fc02c6dc2c.png" alt class="image--center mx-auto" /></p>
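<p>As a rough illustration, here is the simplest possible chunking strategy, fixed-size windows with overlap (real splitters also respect sentence and paragraph boundaries):</p>
<pre><code class="lang-python">def chunk(text, size=40, overlap=10):
    """Naive fixed-size chunking with overlap; real splitters are smarter."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "RAG splits big documents into small chunks so only the relevant parts are retrieved later."
chunks = chunk(doc)
print(len(chunks), chunks[0])
</code></pre>
<p>Each chunk overlaps the previous one by a few characters, so a sentence cut at a boundary still appears whole in at least one chunk.</p>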
<h2 id="heading-step-2-indexing">Step 2 - Indexing</h2>
<blockquote>
<p>Indexing is the process of storing these chunks in a way that allows us to retrieve them efficiently</p>
</blockquote>
<p>Suppose you wanted to know whether a big PDF contains a poem written by Shakespeare, and assume the poem and author details live in one specific chunk. If we could retrieve just that chunk and hand its contents to the LLM, it could easily confirm the answer to your query.</p>
<p>Indexing achieves this by first converting all the chunks into vector embeddings and then storing them in a vector database.</p>
<blockquote>
<p><strong>Vector embeddings</strong> turn things like text or images into numbers, so that similar things are close together on a graph. For example, the words <em>king</em> and <em>queen</em> will be close in this space because they have similar meanings. This helps computers understand relationships between words, images, or other data.</p>
</blockquote>
<p>How do we convert them into vector embeddings?</p>
<p>Many AI companies that build LLMs also offer their own embedding models, and we can use any of them to convert data into vector embeddings. Some are proprietary, like OpenAI's, while others are open source, such as the sentence-transformers models on <a target="_blank" href="https://huggingface.co/sentence-transformers">Hugging Face</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746105829190/7854c146-1b04-4765-98fd-5d2d432c4558.png" alt class="image--center mx-auto" /></p>
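<p>To make the idea concrete without calling a real embedding API, here is a deliberately toy sketch: the "embedding" is just a vector of letter counts, and closeness is measured with cosine similarity. Real models produce dense vectors that capture meaning, not spelling.</p>
<pre><code class="lang-python">import math

def embed(text):
    """Toy embedding: a 26-dimensional vector of letter counts."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch in "abcdefghijklmnopqrstuvwxyz":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v = embed("king and queen")
print(len(v), round(cosine(v, embed("king and queen")), 3))
</code></pre>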
<h2 id="heading-step-3-storing">Step 3 - Storing</h2>
<p>We now need to store these sets of embeddings in a database. It's important to note that we can't just use our regular No-SQL or SQL databases. We need a different type of database that is efficient at storing these embeddings, allowing us to perform similarity searches and other operations.</p>
<blockquote>
<p>A database that stores vectors and allows us to perform related operations is called a <strong>Vector Database</strong></p>
</blockquote>
<p>Here is a list of popular Vector databases</p>
<ul>
<li><p>Pinecone</p>
</li>
<li><p>Qdrant</p>
</li>
<li><p>pgvector</p>
</li>
<li><p>Milvus</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746106039351/5c504067-b115-4be9-8973-922786aff5ea.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-4-embedding-of-user-query">Step 4 - Embedding of User query</h2>
<p>A user always has a question or wants information from this document. To find the relevant parts, we need to perform a similarity search. Therefore, we first need to convert the user's query into embeddings.</p>
<blockquote>
<p>Note: we must use the same embedding model for the query that we used to index the chunks</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746106675484/c2525696-34a0-4657-8746-fa2335ab0df6.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-5-retrieval-of-relevant-chunks">Step 5 - Retrieval of relevant chunks</h2>
<p>Once the user's query is converted into embeddings, we perform a similarity search using them. This returns the chunks whose embeddings are most similar to the query, along with their original text.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746107446864/1c03440d-9266-4103-b134-3da2f861f509.png" alt class="image--center mx-auto" /></p>
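<p>A sketch of this retrieval step, with hand-made 2-D vectors standing in for real embeddings (the chunk names and numbers are invented for illustration):</p>
<pre><code class="lang-python">import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend these chunk embeddings came from the indexing step.
index = {
    "chunk about shakespeare's poem": [0.9, 0.1],
    "chunk about royalty taxes": [0.5, 0.5],
    "chunk about cooking pasta": [0.1, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the k chunks most similar to the query embedding."""
    scored = sorted(index, key=lambda c: cosine(index[c], query_vec), reverse=True)
    return scored[:k]

top = retrieve([1.0, 0.0])
print(top)
</code></pre>
<p>A real vector database does the same ranking, but with approximate nearest-neighbour indexes so it stays fast over millions of chunks.</p>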
<h2 id="heading-step-5-passing-the-retrieved-data-and-the-original-user-query-to-llm">Step 6 - Passing the retrieved data and the original user query to LLM</h2>
<p>Now we have the relevant chunk, which hopefully contains a poem by Shakespeare and his name. We will send this data as context to the LLM, along with the original user query. The LLM should then respond by confirming and displaying the poem.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746108310684/acdca956-8c97-4f05-af54-f7427a2a84e7.png" alt class="image--center mx-auto" /></p>
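<p>This final augmentation step is essentially string assembly. A minimal sketch (the prompt wording is just one possible choice):</p>
<pre><code class="lang-python">def build_prompt(retrieved_chunks, user_query):
    """Assemble the final prompt: retrieved context first, then the question."""
    context = "\n".join(retrieved_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {user_query}"

prompt = build_prompt(["Shall I compare thee to a summer's day? - Shakespeare"],
                      "Does the PDF contain a poem by Shakespeare?")
print(prompt)
</code></pre>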
<p>And that’s how a simple Retrieval-Augmented Generation (RAG) system operates behind the scenes. By now, you should have a solid understanding of the fundamental components and processes involved in constructing a basic RAG-based application. This includes everything from embedding user queries to performing similarity searches and leveraging vector databases to retrieve relevant information. In my next article, we will dive deeper into this topic and work together to build a practical RAG application step by step. This hands-on approach will help solidify your understanding and give you the confidence to implement RAG systems in real-world scenarios.</p>
<h2 id="heading-thankyou">Thank you,</h2>
<h3 id="heading-my-socials">My Socials</h3>
<ul>
<li><p>X - <a target="_blank" href="https://x.com/06_Shreehari">https://x.com/06_Shreehari</a></p>
</li>
<li><p>LinkedIn - <a target="_blank" href="https://www.linkedin.com/in/shreehari-acharya/">https://www.linkedin.com/in/shreehari-acharya/</a></p>
</li>
<li><p>GitHub - <a target="_blank" href="https://github.com/Shreehari-Acharya">https://github.com/Shreehari-Acharya</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Model Context Protocol - Explained]]></title><description><![CDATA[Lets start by breaking down each word
Model : Its referring to the Large Language Model (LLM)
Context : Explaining what and how, which is required for the model to produce great answers
Protocol : Just a set of rule on how to do something.

So MCP is...]]></description><link>https://blog.shreehari.dev/model-context-protocol-explained</link><guid isPermaLink="true">https://blog.shreehari.dev/model-context-protocol-explained</guid><category><![CDATA[Model Context Protocol]]></category><category><![CDATA[mcp]]></category><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><dc:creator><![CDATA[Shreehari Acharya]]></dc:creator><pubDate>Sat, 26 Apr 2025 10:19:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745652797009/0929bb99-323a-4070-8168-07d55f773aae.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-lets-start-by-breaking-down-each-word">Lets start by breaking down each word</h2>
<p><strong>Model</strong> : It refers to the Large Language Model (LLM)</p>
<p><strong>Context</strong> : The information about what is available and how to use it, which the model needs in order to produce great answers</p>
<p><strong>Protocol</strong> : A set of rules for how to do something.</p>
<blockquote>
<p>So MCP is a set of rules that tells the Large language model <strong>what</strong> and <strong>how</strong> to use certain tools.</p>
</blockquote>
<h2 id="heading-understanding-a-tool">Understanding a Tool</h2>
<p>When we write a program, we create functions, and then need to <strong>call the function</strong> explicitly to run it.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> datetime <span class="hljs-keyword">import</span> date

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_current_date</span>():</span> <span class="hljs-comment"># A simple function that returns the current date</span>
    <span class="hljs-keyword">return</span> date.today()

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    print(<span class="hljs-string">"Today's date is:"</span>, get_current_date()) <span class="hljs-comment"># calling the function</span>
</code></pre>
<h3 id="heading-what-happens-when-we-ask-the-current-datetime-to-chatgpt">What happens when we ask ChatGPT for the current date/time?</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745654794996/51edb76c-512d-4b46-bd7f-eda036759578.png" alt class="image--center mx-auto" /></p>
<p>See? It could not give us the current time because it does not have access to real-time data. But our small program does have a function that can return the date.</p>
<blockquote>
<p>What if we could somehow give chatGPT the ability to run our custom function?</p>
</blockquote>
<p>That is what a Tool is! It's just another name for a function that the model can ask to have run on its behalf.</p>
<h2 id="heading-mcp-client-amp-mcp-server">MCP Client &amp; MCP Server</h2>
<p>Let us now understand the difference between MCP Client and MCP Server</p>
<h3 id="heading-mcp-client">MCP Client</h3>
<p>The browser knows how to interact with a web server: it speaks HTTP and can therefore talk to any web server. Similarly, an MCP Client knows how to talk to an MCP Server. Several popular applications ship with an MCP Client; Claude Desktop, Cursor, and Windsurf are among them.</p>
<h3 id="heading-mcp-server">MCP Server</h3>
<p>The MCP server contains the Tools which the LLM can call to execute certain actions. It also describes each tool's purpose, how it is used, what parameters it takes, and what it returns on successful execution.</p>
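<p>As a rough sketch, such a tool description can be pictured as a record like the one below. The field names here are illustrative, not the exact MCP wire format:</p>
<pre><code class="lang-python">from datetime import date

def get_current_date():
    return date.today().isoformat()

# A hypothetical tool description, shaped like what an MCP server advertises.
current_date_tool = {
    "name": "current-date",
    "description": "Returns today's date in ISO format (YYYY-MM-DD).",
    "parameters": {},             # this tool takes no arguments
    "handler": get_current_date,  # the actual function to run
}

print(current_date_tool["name"], current_date_tool["handler"]())
</code></pre>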
<h2 id="heading-final-flow">Final flow</h2>
<p>Let's suppose we have an MCP Server with just one tool called current-date, which calls our defined function <code>get_current_date()</code>.</p>
<p>The first step is to ask the MCP Server, through the MCP <strong>Client</strong>, for all of its tools along with their descriptions, the parameters they take, and what they return upon successful execution. The next step is to take the user's query and pass it to the LLM together with this list of tools and descriptions. Now, whenever the LLM needs information that a tool can provide, it can call that tool via the MCP Client and get real-time data!</p>
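<p>The client side of this flow can be sketched as a tiny dispatch loop. The reply shape below is invented for illustration; real MCP clients and servers speak JSON-RPC:</p>
<pre><code class="lang-python">from datetime import date

# Hypothetical tool registry, as built from the server's tool list in step one.
tools = {"current-date": lambda: date.today().isoformat()}

def handle_model_reply(reply):
    """If the model asked for a tool, run it and hand the result back."""
    if reply.get("tool_call"):
        name = reply["tool_call"]
        return {"role": "tool", "name": name, "result": tools[name]()}
    return {"role": "assistant", "content": reply["content"]}

# The LLM decided it needs real-time data, so it emitted a tool call:
msg = handle_model_reply({"tool_call": "current-date"})
print(msg["result"])
</code></pre>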
<h2 id="heading-diagram-explaining-the-entire-process">Diagram explaining the entire process</h2>
<h3 id="heading-step-1">Step 1</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745659490025/0102f908-f21d-4c5c-a5b3-50ab0da85d7d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2">Step 2</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745660246896/0f6a5dcd-2a7b-432d-bdef-9c13d312248d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3">Step 3</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745660556273/b0fd333e-e429-4e65-865f-3c1fc12648d1.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4">Step 4</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745660797758/6eaac9ff-b234-410a-91a0-1379ef5dbdaf.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5">Step 5</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745661017001/adf277a7-f3da-420c-8ae6-c7a2daf1e178.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-final-step-chatgpt-gives-back-the-final-output-to-the-user">Final step: ChatGPT gives back the final output to the user</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745661162011/27756b87-5b63-455c-90a5-71de79e8ee96.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-where-can-it-be-useful">Where can it be useful?</h2>
<p>It can be used in a lot of places. You could have an MCP server hosting your tools, which may be functions to post something on social media, send an email, or whatever else you can imagine. Then just ask an AI that has access to your MCP server to do some task; if needed, it will call the right tool and can even post a tweet on your behalf!</p>
<h3 id="heading-larger-use-cases">Larger use cases</h3>
<p>Many companies have already started building their own MCP servers so that their users can interact with their products directly in human language. Here is a list of such servers and the impressive tasks they can perform: <a target="_blank" href="https://github.com/modelcontextprotocol/servers">MCP-servers-list</a></p>
<h2 id="heading-further-materials">Further materials</h2>
<p>Official documentation - <a target="_blank" href="https://modelcontextprotocol.io/introduction">https://modelcontextprotocol.io/introduction</a></p>
<p>Creating an MCP server to buy/sell stocks - <a target="_blank" href="https://youtu.be/1iJ34tTjwwo?si=7asblcnGFztWK7ou">https://youtu.be/1iJ34tTjwwo?si=7asblcnGFztWK7ou</a></p>
<h2 id="heading-thank-you">Thank you,</h2>
<p>X - <a target="_blank" href="https://x.com/06_Shreehari">https://x.com/06_Shreehari</a></p>
<p>LinkedIn - <a target="_blank" href="https://www.linkedin.com/in/shreehari-acharya/">https://www.linkedin.com/in/shreehari-acharya/</a></p>
<p>GitHub - <a target="_blank" href="https://github.com/Shreehari-Acharya">https://github.com/Shreehari-Acharya</a></p>
]]></content:encoded></item></channel></rss>