<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Neurotechnics]]></title><description><![CDATA[A blog about the stuff we're working on]]></description><link>https://neurotechnics.com/blog/</link><image><url>https://neurotechnics.com/blog/favicon.png</url><title>Neurotechnics</title><link>https://neurotechnics.com/blog/</link></image><generator>Ghost 5.2</generator><lastBuildDate>Fri, 17 Apr 2026 00:25:31 GMT</lastBuildDate><atom:link href="https://neurotechnics.com/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Ticking Time Bomb: Unpacking the Security Risks of AngularJS, Bootstrap, and jQuery]]></title><description><![CDATA[<p>Applications built on the once-ubiquitous trio of AngularJS 1.x, Bootstrap 3, and jQuery are sitting on a security precipice. 
With official support and security patches for these versions long since sunsetted, businesses and developers are exposed to a growing number of unpatched vulnerabilities, leaving their systems and users ripe</p>]]></description><link>https://neurotechnics.com/blog/legacy-angular-bootstrap-jquery/</link><guid isPermaLink="false">688c02aa7385b40bc0d1db91</guid><category><![CDATA[security]]></category><category><![CDATA[angular]]></category><category><![CDATA[bootstrap]]></category><category><![CDATA[jquery]]></category><category><![CDATA[react]]></category><category><![CDATA[vue]]></category><category><![CDATA[javascript]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Thu, 31 Jul 2025 23:56:58 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1509479200622-4503f27f12ef?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEyfHxoYWNrZXJ8ZW58MHx8fHwxNzUzOTYzOTE0fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1509479200622-4503f27f12ef?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEyfHxoYWNrZXJ8ZW58MHx8fHwxNzUzOTYzOTE0fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="The Ticking Time Bomb: Unpacking the Security Risks of AngularJS, Bootstrap, and jQuery"><p>Applications built on the once-ubiquitous trio of AngularJS 1.x, Bootstrap 3, and jQuery are sitting on a security precipice. With official support and security patches for these versions long since sunsetted, businesses and developers are exposed to a growing number of unpatched vulnerabilities, leaving their systems and users ripe for exploitation.</p><p>The core of the problem lies in the end-of-life (EOL) status of these foundational web development tools. AngularJS 1.x reached its EOL on <strong>December 31, 2021</strong>. 
Similarly, Bootstrap 3 has been unsupported since <strong>July 2019,</strong> and while jQuery is still actively developed, older versions are no longer maintained. This means that any new security flaws discovered in these legacy versions will remain unaddressed by their original creators, creating a permanent window of opportunity for attackers.</p><h3 id="angularjs-1x-a-playground-for-attackers">AngularJS 1.x: A Playground for Attackers</h3><p>AngularJS 1.x, in particular, presents a significant attack surface. Its architecture, especially the powerful and flexible nature of its two-way data binding and expression evaluation, has been a source of numerous security headaches.<br>Key vulnerabilities include:</p><ul><li><strong>Cross-Site Scripting (XSS): </strong>This is the most critical and prevalent risk in AngularJS 1.x. Attackers can inject malicious scripts into web pages viewed by other users. The framework&apos;s <em><code>$sce</code></em> (Strict Contextual Escaping) service was introduced to mitigate this, but improper use or bypassing it can easily lead to XSS. Sandbox bypass vulnerabilities have also been discovered and patched in the past, but with no new patches, any newly found bypasses will leave applications vulnerable.</li><li><strong>Template Injection: </strong>AngularJS templates are powerful, and if an attacker can control any part of a template, they can execute arbitrary JavaScript. This is a significant risk, especially in applications that dynamically generate templates based on user input.</li><li><strong>Cross-Site Request Forgery (CSRF): </strong>While AngularJS has built-in CSRF protection mechanisms, they require proper server-side implementation. 
Misconfiguration or a lack of server-side validation can render these protections useless.</li></ul><h3 id="bootstrap-3-outdated-and-exposed">Bootstrap 3: Outdated and Exposed</h3><p>Bootstrap 3, while primarily a CSS framework, is not immune to security vulnerabilities, especially in its JavaScript components.</p><p>The most significant risks associated with using this outdated version include:</p><ul><li><strong>Cross-Site Scripting (XSS) in JavaScript Components: </strong>Several XSS vulnerabilities have been discovered in Bootstrap 3&apos;s JavaScript plugins, such as the tooltip and popover components. These flaws allow attackers to inject malicious code through data attributes. For instance, <a href="https://nvd.nist.gov/vuln/detail/cve-2019-8331"><code>CVE-2019-8331</code></a> highlighted an XSS vulnerability in the <em><code>data-template</code></em> attribute of tooltips and popovers.</li><li><strong>Dependency on Outdated jQuery: </strong>Bootstrap 3 relies on older versions of jQuery, which themselves have a host of unpatched vulnerabilities. This creates a chain of risk, where a vulnerability in the underlying dependency can compromise the entire application.</li></ul><h3 id="jquery-a-legacy-of-vulnerabilities">jQuery: A Legacy of Vulnerabilities</h3><p>jQuery, being one of the most widely used JavaScript libraries in history, has a long list of documented vulnerabilities in its older versions. Relying on an outdated version of jQuery, as many AngularJS 1.x and Bootstrap 3 applications do, exposes a project to:</p><ul><li><strong>Cross-Site Scripting (XSS): </strong>Numerous XSS vulnerabilities have been patched in newer jQuery versions. Older versions are susceptible to attacks where malicious input can be executed as code when manipulated with certain jQuery functions. 
<a href="https://nvd.nist.gov/vuln/detail/cve-2020-11022"><code>CVE-2020-11022</code></a> and <a href="https://nvd.nist.gov/vuln/detail/cve-2020-11023"><code>CVE-2020-11023</code></a> are notable examples of XSS flaws in versions prior to 3.5.0.</li><li><strong>Prototype Pollution: </strong>This is a serious vulnerability where an attacker can modify the <em><code>Object.prototype</code></em>. This can lead to a variety of other security issues, including the bypass of security controls and denial of service. <a href="https://nvd.nist.gov/vuln/detail/cve-2019-11358"><code>CVE-2019-11358</code></a> is a well-known prototype pollution vulnerability in jQuery versions before 3.4.0.</li><li><strong>Denial of Service (DoS): </strong>Certain vulnerabilities in older jQuery versions could allow an attacker to crash a user&apos;s browser or cause the application to become unresponsive.</li></ul><h3 id="the-unseen-danger-lack-of-security-patches">The Unseen Danger: Lack of Security Patches</h3><p>Beyond the specific, documented vulnerabilities, the most significant risk is the absence of ongoing security support. The cybersecurity landscape is constantly evolving, with new attack techniques and vulnerabilities being discovered daily. Without a dedicated team actively monitoring and patching these legacy libraries, applications built upon them are defenceless against these emerging threats.</p><h3 id="mitigation-strategies-a-necessary-evolution">Mitigation Strategies: A Necessary Evolution</h3><p>For organizations still running applications on this outdated stack, the primary recommendation is to migrate to a modern, supported framework. 
Newer versions of Angular, React, or Vue.js offer more robust security features and active communities that promptly address vulnerabilities.<br>For those unable to undertake an immediate, full-scale migration, other options include:</p><ul><li><strong>Upgrading to the latest minor versions:</strong> While not a complete solution, ensuring you are on the absolute latest available point release of AngularJS 1.x, Bootstrap 3, and a more recent, secure version of jQuery can mitigate some known vulnerabilities. </li><li><strong>Extended Long-Term Support (ELTS): </strong>Several third-party vendors offer commercial ELTS for AngularJS, providing security patches for a fee. This can be a stop-gap measure to keep applications secure while planning a longer-term migration strategy. </li><li><strong>Thorough Security Audits and Penetration Testing:</strong> Regularly engaging security professionals to audit the application can help identify and mitigate vulnerabilities specific to the codebase and its dependencies.</li><li><strong>Implementing a Web Application Firewall (WAF):</strong> A WAF can provide a layer of protection by filtering and monitoring HTTP traffic between the application and the internet, potentially blocking common attacks.</li></ul><p>In conclusion, while AngularJS 1.x, Bootstrap 3, and older versions of jQuery were instrumental in shaping the modern web, their time has passed. Continuing to build and maintain applications on this unsupported foundation is a significant and unnecessary security risk. The question for businesses is not if a vulnerability will be exploited, but when. 
Proactive migration and modernisation are the only truly effective long-term solutions.</p><h3 id="references">References</h3><ul><li><a href="https://tuxcare.com/blog/still-rocking-the-classics-a-pragmatists-guide-to-angularjs-support-in-the-enterprise">https://tuxcare.com/blog/still-rocking-the-classics-a-pragmatists-guide-to-angularjs-support-in-the-enterprise</a></li><li><a href="https://www.herodevs.com/blog-posts/the-state-of-angularjs-in-2025">https://www.herodevs.com/blog-posts/the-state-of-angularjs-in-2025</a></li><li><a href="https://www.valencynetworks.com/kb/how-to-fix-vulnerable-jquery-javascript-library.html">https://www.valencynetworks.com/kb/how-to-fix-vulnerable-jquery-javascript-library.html</a></li><li><a href="https://blog.getbootstrap.com/2019/07/24/lts-plan">https://blog.getbootstrap.com/2019/07/24/lts-plan</a></li><li><a href="https://nvd.nist.gov/vuln/detail/cve-2019-8331">https://nvd.nist.gov/vuln/detail/cve-2019-8331</a></li><li><a href="https://nvd.nist.gov/vuln/detail/cve-2020-11022">https://nvd.nist.gov/vuln/detail/cve-2020-11022</a></li><li><a href="https://nvd.nist.gov/vuln/detail/cve-2020-11023">https://nvd.nist.gov/vuln/detail/cve-2020-11023</a></li><li><a href="https://nvd.nist.gov/vuln/detail/cve-2019-11358">https://nvd.nist.gov/vuln/detail/cve-2019-11358</a></li><li><a href="https://my.f5.com/manage/s/article/K000141463">https://my.f5.com/manage/s/article/K000141463</a></li><li><a href="https://via.studio/journal/bootstrap-3-vulnerability">https://via.studio/journal/bootstrap-3-vulnerability</a></li><li><a href="https://www.icesoft.org/wiki/display/ICE/jQuery+Security+Vulnerability+Mitigation">https://www.icesoft.org/wiki/display/ICE/jQuery+Security+Vulnerability+Mitigation</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Resize/Maximize the default Logical Volume in Ubuntu]]></title><description><![CDATA[<p>You may have noticed that when installing Ubuntu from a USB disk, it created a default mount point on your disk that is only 
about 20% of the full capacity of the disk.</p><p>Perhaps 100 or 200GB of your 1TB disk.</p><p>Using the command <code>lsblk</code> you might see your</p>]]></description><link>https://neurotechnics.com/blog/resize-the-default-lvm-partition-in-ubuntu/</link><guid isPermaLink="false">65e6a772929c1138ec82e256</guid><dc:creator><![CDATA[James]]></dc:creator><pubDate>Tue, 05 Mar 2024 08:22:37 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1709377195497-b2723c8970a9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8YWxsfDN8fHx8fHwyfHwxNzA5NjE1MDY3fA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1709377195497-b2723c8970a9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8YWxsfDN8fHx8fHwyfHwxNzA5NjE1MDY3fA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Resize/Maximize the default Logical Volume in Ubuntu"><p>You may have noticed that when installing Ubuntu from a USB disk, it created a default mount point on your disk that is only about 20% of the full capacity of the disk.</p><p>Perhaps 100 or 200GB of your 1TB disk.</p><p>Using the command <code>lsblk</code> you might see your main partition being much larger than the logical volume mounted to it.</p><p>Below, you can see that disk partition 3, <code>sda3</code>, is 553GB, but the logical volume <code>ubuntu--vg-ubuntu--lv</code> attached to root <code>/</code> is only 100GB.</p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2024/03/image.png" class="kg-image" alt="Resize/Maximize the default Logical Volume in Ubuntu" loading="lazy" width="652" height="287" srcset="https://neurotechnics.com/blog/content/images/size/w600/2024/03/image.png 600w, https://neurotechnics.com/blog/content/images/2024/03/image.png 652w"></figure><p>If you didn&apos;t notice this happen, or perhaps weren&apos;t sure it was safe to change the defaults 
during the guided installation process, you&apos;ll probably want to fix that now to get access to your full disk!</p><p>Using various command-line tools, you can resize your volume easily.</p><p>1.	Check the current size of the logical volume using <code>df</code> (or <code>duf</code> if you have it installed):<br><code>df -hT /dev/mapper/ubuntu--vg-ubuntu--lv</code></p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2024/03/image-2.png" class="kg-image" alt="Resize/Maximize the default Logical Volume in Ubuntu" loading="lazy" width="700" height="39" srcset="https://neurotechnics.com/blog/content/images/size/w600/2024/03/image-2.png 600w, https://neurotechnics.com/blog/content/images/2024/03/image-2.png 700w"></figure><p>2.	Check if you can actually perform the resize by testing the command first (with the <code>-t</code> option):<br><code>sudo lvresize -tvl +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv</code></p><p>3.	Resize the logical volume for real:<br><code>sudo lvresize -vl +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv</code></p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2024/03/image-1.png" class="kg-image" alt="Resize/Maximize the default Logical Volume in Ubuntu" loading="lazy" width="1090" height="211" srcset="https://neurotechnics.com/blog/content/images/size/w600/2024/03/image-1.png 600w, https://neurotechnics.com/blog/content/images/size/w1000/2024/03/image-1.png 1000w, https://neurotechnics.com/blog/content/images/2024/03/image-1.png 1090w" sizes="(min-width: 720px) 720px"></figure><p>4.	
Resize the filesystem to match:<br><code>sudo resize2fs -p /dev/mapper/ubuntu--vg-ubuntu--lv</code></p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2024/03/image-3.png" class="kg-image" alt="Resize/Maximize the default Logical Volume in Ubuntu" loading="lazy" width="838" height="83" srcset="https://neurotechnics.com/blog/content/images/size/w600/2024/03/image-3.png 600w, https://neurotechnics.com/blog/content/images/2024/03/image-3.png 838w" sizes="(min-width: 720px) 720px"></figure><p>5.	Finally, check the size of the logical volume to ensure everything was successful:<br><code>df -hT /dev/mapper/ubuntu--vg-ubuntu--lv</code></p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2024/03/image-5.png" class="kg-image" alt="Resize/Maximize the default Logical Volume in Ubuntu" loading="lazy" width="666" height="42" srcset="https://neurotechnics.com/blog/content/images/size/w600/2024/03/image-5.png 600w, https://neurotechnics.com/blog/content/images/2024/03/image-5.png 666w"></figure><p>Using the <code>df</code> or <code>duf</code> command, you can see an overview of your new disk layout:</p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2024/03/image-6.png" class="kg-image" alt="Resize/Maximize the default Logical Volume in Ubuntu" loading="lazy" width="991" height="190" srcset="https://neurotechnics.com/blog/content/images/size/w600/2024/03/image-6.png 600w, https://neurotechnics.com/blog/content/images/2024/03/image-6.png 991w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Binding port 53 to Pi-hole in docker]]></title><description><![CDATA[<p>When deploying pi-hole via docker on your Linux or Mac machine, there&apos;s a high probability that port 53 is already being used to resolve DNS requests via your local machine to an upstream server, and as such you cannot bind your docker 
container to port 53 on</p>]]></description><link>https://neurotechnics.com/blog/pi-hole-on-docker/</link><guid isPermaLink="false">65d7d8f39183482878a4a161</guid><category><![CDATA[raspberry pi]]></category><category><![CDATA[docker]]></category><category><![CDATA[pihole]]></category><category><![CDATA[port 53]]></category><category><![CDATA[dns]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Thu, 22 Feb 2024 08:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1618440111263-e913c9f5f56d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDMxfHxyYXNwYmVycnklMjBwaXxlbnwwfHx8fDE3MDg2NTUzNzR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1618440111263-e913c9f5f56d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDMxfHxyYXNwYmVycnklMjBwaXxlbnwwfHx8fDE3MDg2NTUzNzR8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Binding port 53 to Pi-hole in docker"><p>When deploying pi-hole via docker on your Linux or Mac machine, there&apos;s a high probability that port 53 is already being used to resolve DNS requests via your local machine to an upstream server, and as such you cannot bind your docker container to port 53 on the host.</p><p>You&apos;ll likely see the error:<br><code>listen tcp4 0.0.0.0:53: bind: address already in use</code></p><p>To find what is using port 53 you can execute:<br><code>sudo lsof -i -P -n | grep LISTEN</code><br>on Linux, or on Mac:<br><code>sudo lsof -i :53</code></p><p>You can be 99.9% sure that <code>systemd-resolved</code> on Linux (or <code>mDNSResponder</code> on Mac) is what is listening to port 53.</p><p>Before diving in and disabling your local DNS resolver, you probably want to consider moving Pi-Hole to its own VLAN instead. 
This will ensure it gets its own interface on your network, and will make discovery of clients more reliable in the pi-hole itself.</p><p>To do that, you&apos;ll want to add a Macvlan.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.docker.com/network/drivers/macvlan/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Macvlan network driver</div><div class="kg-bookmark-description">All about using Macvlan to make your containers appear like physical machines on the network</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.docker.com/assets/favicons/docs@2x.ico" alt="Binding port 53 to Pi-hole in docker"><span class="kg-bookmark-author">Docker Documentation</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.docker.com/assets/images/thumbnail.webp" alt="Binding port 53 to Pi-hole in docker"></div></a></figure><p>To do this, you can add a new &quot;networks&quot; section to your docker-compose.yml:</p><pre><code class="language-yaml">networks:
  vlan:
    enable_ipv6: true
    driver: macvlan
    driver_opts:
      parent: eno1
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
          ip_range: 192.168.1.64/27 # this must not overlap with your DHCP range
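          # e.g. if the router DHCP pool hands out 192.168.1.100-254, an
          # ip_range of 192.168.1.64/27 (covering .64-.95) stays safely
          # outside it (example addresses only - adjust for your own network)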
        - subnet: 2001:db8:3333::/64
          gateway: 2001:db8:3333::1</code></pre><p> You can now use this new VLAN in your main pi-hole service configuration:</p><pre><code class="language-yaml">services:
  pihole:
  ...
    networks:
      vlan:
        ipv4_address: 192.168.1.10</code></pre><p>Here, you just need to specify the actual IP address you want to assign to your Pi-Hole. This will now be the IP address of your new Pi-Hole DNS Server.<br>You can access the Pi-Hole admin console here too, e.g.:<br><code>http://192.168.1.10/admin</code></p><h2 id="option-2">Option 2</h2><p>If you really want to disable your local DNS resolver, or you&apos;re not quite comfortable with Docker VLANs yet, you can go ahead and do that.</p><p>Be aware however, this may have other side-effects if you ever remove or shut down your pi-hole container. (i.e. you&apos;ll lose the ability to resolve domains by name.)</p><p>So, to go ahead and disable it, you can use the following commands:</p><pre><code class="language-bash">sudo systemctl stop systemd-resolved
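# stop frees port 53 immediately; disable keeps systemd-resolved from starting again at boot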
sudo systemctl disable systemd-resolved.service</code></pre><p>Now you have port 53 open, but no DNS configured for your host.</p><p>To fix that, you need to edit <code>/etc/resolv.conf</code> and add your upstream DNS <strong><em>nameserver</em></strong> IP address. e.g.:</p><pre><code>nameserver 1.1.1.1</code></pre><p>If you have another name-server in that file, just comment it out in case you need to remember the original value.<br>Once your pi-hole docker container is up and running, you can change the DNS server of your host to localhost, as you are binding port 53 to the host machine. Again, change <code>/etc/resolv.conf</code> like this:</p><pre><code>nameserver 127.0.0.1</code></pre><p>Have a look at the official Docker Pi-Hole documentation for more information on configuration options, running pi-hole as a DHCP server etc.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/pi-hole/docker-pi-hole/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - pi-hole/docker-pi-hole: Pi-hole in a docker container</div><div class="kg-bookmark-description">Pi-hole in a docker container. Contribute to pi-hole/docker-pi-hole development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Binding port 53 to Pi-hole in docker"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">pi-hole</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/fb973353634b51c434be18b4345340ded32476e254f8e3323e061f604d8d687a/pi-hole/docker-pi-hole" alt="Binding port 53 to Pi-hole in docker"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[My go-to command line Linux tools for WSL]]></title><description><![CDATA[<p>I&apos;ve started using WSL pretty regularly now that our development process has gone cross-platform by default. 
I still love developing on Windows, and even though my entire tool-chain is available on a Mac, I prefer the customisation of both hardware and software that comes with the PC platform.</p>]]></description><link>https://neurotechnics.com/blog/my-go-to-command-line-linux-tools-for-wsl/</link><guid isPermaLink="false">62cfbb5091cdac37304e48b2</guid><dc:creator><![CDATA[James]]></dc:creator><pubDate>Fri, 15 Jul 2022 08:00:23 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1629654297299-c8506221ca97?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGxpbnV4fGVufDB8fHx8MTY1Nzc4MTAwMQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1629654297299-c8506221ca97?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGxpbnV4fGVufDB8fHx8MTY1Nzc4MTAwMQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="My go-to command line Linux tools for WSL"><p>I&apos;ve started using WSL pretty regularly now that our development process has gone cross-platform by default. I still love developing on Windows, and even though my entire tool-chain is available on a Mac, I prefer the customisation of both hardware and software that comes with the PC platform.</p><p>That being said, I also <em>love</em> Linux, and would use it daily if it fit into our corporate environment. So, I use WSL in all its glory to back up my dev environment. 
It&apos;s pretty great and fits into my existing Windows dev environment well now.</p><p>So, here are a couple of awesome command line tools I use regularly, and how to install them.</p><p>I&apos;ll be adding to this list from time to time, so come back every now and then to see if I&apos;ve replaced these tools with anything new.</p><ol><li>nala - front-end for apt</li><li>duf - Disk Usage/Free utility</li><li>btop - system resource monitor</li></ol><h1 id="1-nala">1) nala</h1><p>&quot;nala&quot; is an alternative front end to APT (libapt-pkg) that has a really great interface and command-line GUI, including progress graphs, tables etc. It also supports parallel downloads and filters out the guff messages that perhaps don&apos;t help all that much - especially helpful for users new to Linux and apt that don&apos;t understand the complexities and nuances of apt.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/volitank/nala"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - volitank/nala: a wrapper for the apt package manager.</div><div class="kg-bookmark-description">a wrapper for the apt package manager. 
Contribute to volitank/nala development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="My go-to command line Linux tools for WSL"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">volitank</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/56cba48c65682a971e0151e9f913636e967f89151dc7b89dc35573f93d28b012/volitank/nala" alt="My go-to command line Linux tools for WSL"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://gitlab.com/volian/nala"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Volian Linux / nala &#xB7; GitLab</div><div class="kg-bookmark-description">GitLab.com</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://gitlab.com/assets/favicon-72a2cad5025aa931d6ea56c3201d1f18e68a8cd39788c7c80d5b2b82aa5143ef.png" alt="My go-to command line Linux tools for WSL"><span class="kg-bookmark-author">GitLab</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://gitlab.com/uploads/-/system/project/avatar/31927362/Volian-Cat-No-Text-Dark.png" alt="My go-to command line Linux tools for WSL"></div></a></figure><h3 id="installation">Installation</h3><p>Installation differs depending on the distro and version of Linux you&apos;re running - Ubuntu 21+ is required, for example, while version 18 and earlier are not supported.</p><p>Head over to the <a href="https://gitlab.com/volian/nala/-/wikis/Installation">Volian WIKI</a> for installation instructions.<br>Alternatively, see the <a href="https://gitlab.com/volian/volian-archive/-/releases">Volian GitLab Releases</a> page for downloads.</p><h1 id="2-duf">2) duf</h1><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/muesli/duf"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub 
- muesli/duf: Disk Usage/Free Utility - a better &#x2018;df&#x2019; alternative</div><div class="kg-bookmark-description">Disk Usage/Free Utility - a better &#x2018;df&#x2019; alternative - GitHub - muesli/duf: Disk Usage/Free Utility - a better &#x2018;df&#x2019; alternative</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="My go-to command line Linux tools for WSL"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">muesli</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://repository-images.githubusercontent.com/297165998/d7d85680-0ae5-11eb-95a5-bcbf94eb935e" alt="My go-to command line Linux tools for WSL"></div></a></figure><p>&quot;duf&quot; is a visually updated alternative to the &quot;df&quot; (disk free) command for checking disk usage and statistics.</p><h3 id="installation-1">Installation</h3><p>If you&apos;re using Ubuntu (or any Debian based distro) you can install <em>duf </em>by downloading the .deb package directly from GitHub using <code>wget</code>:</p><p>* Be sure to check for the latest version in the <a href="https://github.com/muesli/duf/releases">releases page</a> first - just replace the URL below with the latest.</p><pre><code class="language-sh">wget https://github.com/muesli/duf/releases/download/v0.8.1/duf_0.8.1_linux_amd64.deb</code></pre><p>Then, install the package with <code>dpkg</code>:</p><pre><code class="language-bash">sudo dpkg -i duf_0.8.1_linux_amd64.deb</code></pre><p>If you&apos;d like to remove the downloaded package after successful install, simply run:<br><code>rm -rf duf_0.8.1_linux_amd64.deb</code></p><h1 id="3-btop">3) btop++</h1><p>You&apos;ll probably know <code>top</code> as a useful system resource monitoring tool for the command line in Linux.</p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2022/07/top--Custom-.png" class="kg-image" alt="My go-to command 
line Linux tools for WSL" loading="lazy" width="600" height="256" srcset="https://neurotechnics.com/blog/content/images/2022/07/top--Custom-.png 600w"></figure><p>You may even use <code>htop</code>, which adds colour and interactivity:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://neurotechnics.com/blog/content/images/2022/07/htop--Custom-.png" class="kg-image" alt="My go-to command line Linux tools for WSL" loading="lazy" width="600" height="303" srcset="https://neurotechnics.com/blog/content/images/2022/07/htop--Custom-.png 600w"><figcaption>htop</figcaption></figure><p><code>btop</code> takes this further still. Check out the btop++ repository on GitHub:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/aristocratos/btop"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - aristocratos/btop: A monitor of resources</div><div class="kg-bookmark-description">A monitor of resources. Contribute to aristocratos/btop development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="My go-to command line Linux tools for WSL"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">aristocratos</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://repository-images.githubusercontent.com/365005377/83a11b00-90f4-4b9b-a658-0ce7eb88e67a" alt="My go-to command line Linux tools for WSL"></div></a></figure><h3 id="installation-2">Installation</h3><p>To install btop++ from the command line, simply download the latest version from the GitHub repo (be sure to check for the latest release from the <a href="https://github.com/aristocratos/btop/releases">release page</a>).</p><p>You want the <code>x86_64-linux-musl</code> version...</p><pre><code class="language-bash">wget https://github.com/aristocratos/btop/releases/download/v1.2.8/btop-x86_64-linux-musl.tbz</code></pre><p>Then, unzip the <code>bin/btop</code> folder from 
the archive to <code>/usr/local/bin</code>:</p><pre><code class="language-bash">sudo tar xf btop-x86_64-linux-musl.tbz -C /usr/local bin/btop</code></pre><p>You can now simply run <code>btop</code> to see your new system monitor. Remember it&apos;s <code>ctrl+c</code> to exit.</p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2022/07/btop--.png" class="kg-image" alt="My go-to command line Linux tools for WSL" loading="lazy" width="1157" height="581" srcset="https://neurotechnics.com/blog/content/images/size/w600/2022/07/btop--.png 600w, https://neurotechnics.com/blog/content/images/size/w1000/2022/07/btop--.png 1000w, https://neurotechnics.com/blog/content/images/2022/07/btop--.png 1157w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Obtaining your public IP Address from the command line]]></title><description><![CDATA[Learn how to retrieve your public-facing IP Address from the command line for use in automation and scripting.]]></description><link>https://neurotechnics.com/blog/automate-obtaining-your-public-ip-address-from-the-command-line/</link><guid isPermaLink="false">621fed550d7f34290881ceb5</guid><category><![CDATA[nslookup]]></category><category><![CDATA[ipaddress]]></category><category><![CDATA[ipconfig]]></category><category><![CDATA[public ip]]></category><category><![CDATA[command-line]]></category><category><![CDATA[ifconfig]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Thu, 03 Mar 2022 00:02:09 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2022/03/ip-address-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://neurotechnics.com/blog/content/images/2022/03/ip-address-2.jpg" alt="Obtaining your public IP Address from the command line"><p>Most command line users, even the newbies, will have seen the use of <code>ipconfig</code> for reading details of network interfaces and their respective
configuration. However, the configuration of most devices only applies to the connection to the internal network. If you need to know the IP Address of your external (public-facing) internet gateway, you&apos;ll need to do a little more work.</p><p>Sure, there are plenty of websites that can give you this information (even our website has a tool: <a href="https://neurotechnics.com/tools/ipinfo">https://neurotechnics.com/tools/ipinfo</a>). And searching Google for the phrase <code>what is my ip address</code> will give you the answer as an information card (along with about 1.8 trillion other results).</p><h3 id="dos-bash-windows-and-linux">DOS / Bash (Windows and Linux)</h3><p>From the Command Prompt (or within a batch/cmd script) you can use the built-in <code>nslookup</code> command:</p><pre><code class="language-dos">nslookup myip.opendns.com resolver1.opendns.com</code></pre><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2022/03/Screenshot-2022-03-03-101358.jpg" class="kg-image" alt="Obtaining your public IP Address from the command line" loading="lazy" width="483" height="149"></figure><p>However, this is a little more verbose than needed in an automation context.<br>To simplify, we would need to parse the output, which in a batch file can be tedious. So, let&apos;s use <code>curl</code> to call a remote API. (Yes, <code>curl</code> should work out of the box in recent versions of Windows.)</p><p>We can talk to various web APIs for this... <code>api.ipify.org</code>, <code>ipinfo.io</code> and <code>ifconfig.me</code> are the ones I use interchangeably.</p><pre><code class="language-dos">C:\&gt; curl https://ipinfo.io/ip
123.123.123.123

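C:\&gt; REM A sketch: capture the address into a variable for later use in a
C:\&gt; REM script. PUBLIC_IP is just an illustrative name; inside a .cmd file,
C:\&gt; REM use %%i rather than %i.
C:\&gt; for /f &quot;delims=&quot; %i in (&apos;curl -s https://ipinfo.io/ip&apos;) do @set PUBLIC_IP=%i
C:\&gt; echo %PUBLIC_IP%
123.123.123.123
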
C:\&gt;curl https://ifconfig.me/ip
123.123.123.123</code></pre><p>Update: <code>api.ipify.org</code> DOES NOT have any tracking whatsoever (other than resolving your IP address). I highly recommend this service:</p><pre><code class="language-dos">c:\&gt; curl https://api.ipify.org
123.123.123.123</code></pre><h3 id="powershell-windows-and-linux">PowerShell (Windows and Linux)</h3><p>If you prefer using PowerShell, you can use <code>Invoke-WebRequest</code> to call one of the many remote websites or API&apos;s that will give you the information you need:</p><pre><code class="language-powershell">PS&gt; Invoke-WebRequest ifconfig.me/ip</code></pre><p>The <code>Invoke-WebRequest</code> command will return a formatted PowerShell output containing a summary of the http-response from the server.</p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2022/03/Screenshot-2022-03-03-095554.jpg" class="kg-image" alt="Obtaining your public IP Address from the command line" loading="lazy" width="623" height="397" srcset="https://neurotechnics.com/blog/content/images/size/w600/2022/03/Screenshot-2022-03-03-095554.jpg 600w, https://neurotechnics.com/blog/content/images/2022/03/Screenshot-2022-03-03-095554.jpg 623w"></figure><p>The <code>Content</code> property contains the information we&apos;re interested in. You can use the response data as is, or you can retrieve the <code>Content</code> property only, by specifying the command line:</p><pre><code class="language-powershell">PS C:\&gt; (Invoke-WebRequest ifconfig.me/ip).Content
123.123.123.123
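PS C:\&gt; # Alternatively (a sketch), Invoke-RestMethod returns the response
PS C:\&gt; # body directly as a string, so no .Content step is needed:
PS C:\&gt; Invoke-RestMethod ifconfig.me/ip
123.123.123.123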
PS C:\&gt;</code></pre>]]></content:encoded></item><item><title><![CDATA[Configure GPG to sign Git commits (in Windows)]]></title><description><![CDATA[<p>Configuring GPG to sign Git commits isn&apos;t trivial, especially if you need integration with an IDE such as VSCode or SourceTree.</p><p>Fortunately there&apos;s a straightforward set of steps you can take.</p><h2 id="install-required-software">Install required software</h2><p>You can skip any steps you&apos;ve already completed, but</p>]]></description><link>https://neurotechnics.com/blog/configure-gpg-to-sign-git-commits-in-windows/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5b4</guid><category><![CDATA[git]]></category><category><![CDATA[gpg]]></category><category><![CDATA[github]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Thu, 11 Feb 2021 22:21:02 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2021/02/Screenshot_2021-02-11-DWS-iSolutions-CORE_Skewed-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://neurotechnics.com/blog/content/images/2021/02/Screenshot_2021-02-11-DWS-iSolutions-CORE_Skewed-1.png" alt="Configure GPG to sign Git commits (in Windows)"><p>Configuring GPG to sign Git commits isn&apos;t trivial, especially if you need integration with an IDE such as VSCode or SourceTree.</p><p>Fortunately there&apos;s a straightforward set of steps you can take.</p><h2 id="install-required-software">Install required software</h2><p>You can skip any steps you&apos;ve already completed, but in general you&apos;ll need to install the following:</p><ul><li>Git - <a href="https://git-scm.com/download/win">https://git-scm.com/download/win</a></li><li>GnuPG (GPG4Win) - <a href="https://gpg4win.org/download.html">https://gpg4win.org/download.html</a></li></ul><h2 id="generate-a-new-key">Generate a new key</h2><p>If you already have a PGP/GPG key you&apos;d like to use, you can skip this step; if not, follow the instructions here about
generating a new GPG Key:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.github.com/en/github/authenticating-to-github/generating-a-new-gpg-key"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Generating a new GPG key - GitHub Docs</div><div class="kg-bookmark-description">If you don&#x2019;t have an existing GPG key, you can generate a new GPG key to use for signing commits and tags.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.github.com/assets/images/site/favicon.svg" alt="Configure GPG to sign Git commits (in Windows)"><span class="kg-bookmark-author">GitHub Docs</span></div></div></a></figure><h2 id="backup-backup-and-re-backup">Backup, backup and re-backup</h2><p>For most use cases, <strong>the secret key</strong> need not be exported and <strong>should not be distributed</strong>. In order to create a backup key, use the export-backup option:</p><pre><code class="language-bash">$ gpg --output backupkeys.pgp --armor --export-secret-keys --export-options export-backup jane.smith@email.com</code></pre><p>This will export all necessary information to restore the secret keys, including the trust database information. Make sure you store your backup secret keys in a secure physical location.</p><p>If this key is important to you, I recommend printing out the key on paper using <a href="https://www.jabberwocky.com/software/paperkey/" rel="noreferrer">paperkey</a>, then optionally laminate it, and place the paper key in a fireproof/waterproof safe.</p><h2 id="export-your-public-key">Export your public key</h2><p>To export your public key, you&apos;ll first need to figure out your key&apos;s ID:<br><code>gpg --list-secret-keys --key-id-format LONG</code></p><pre><code class="language-bash">$ gpg --list-secret-keys --key-id-format LONG
# /c/Users/jsmith/.gnupg/secring.gpg
# ----------------------------------
# sec   rsa4096/DA03396D49F620F3 2021-02-11 [SC] [expires: 2023-02-11]
#       802DB710FC9A7C5E4D9A6FA7DA03396D49F620F3
# uid                 [ultimate] Jane Smith &lt;jane.smith@email.com&gt;
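# To capture the long key ID non-interactively (a sketch; assumes the
# &quot;sec&quot; line format shown above), split the line on &apos;/&apos; and spaces:
$ gpg --list-secret-keys --key-id-format LONG | awk -F&apos;[/ ]+&apos; &apos;/^sec/ {print $3; exit}&apos;
# DA03396D49F620F3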
</code></pre><p>Although you can use the full fingerprint of the key (802DB710FC9A7C5E4D9A6FA7DA03396D49F620F3), you can also simply use the shorter Key ID (DA03396D49F620F3).</p><p>Next you&apos;ll need to dump the public key:</p><pre><code class="language-bash">$ gpg --armor --export DA03396D49F620F3
# Prints the GPG key ID, in ASCII armor format, e.g.:

-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGAk90sBEADC3l4nHixxcQ7XDbfWPs799uS92H25Z5g6SOJT6grqi+jRz425
gw4WBuZtRTrwz0zglpXkaVuCZMQT1vMrzxUbK1t/nv+SQyh0URr+wYFFCJcvpoce
dS00bQ9urOwaqXnIRuH2JnMUbTPPhp3BQJdrOFp46Itc1bURFd1ZRqoLgD/OItuu
nxP8Ncekdy1BdJJM94Mag+wEYoPfa8Q3qSeYU7zMr62KW+TGN8zNkBjEgWbq9nRT
dvm0GoqgVyBgWwffbPEgNHnFuDaCPWvkQ3bUr6vpGQewG2J8TsHYtRnOP5pkuX7A
wLcMq8zzKg79b53jpXHCVmdr91QsH5z4Ah630fjZTprFtfWzQrwolvRlH4v92vx7
PRRttasuQrjNjXY6BN9T4pQyKiepBMdTKhElLHVTZiiELxhoDBvRLznTqEelBflM
ZagoQLbXTE1F6C+l6/S8SCp1qJlBym6fVQG1tn91rxcC+RJqC99TBRYeL5xiNfp9
foZ4I0b8dg0UzsxC5cIHLxYThE3BKPR9y42BrLcG3zAxBrE3APWVJWPPN3epmaCZ
BIZMagCi4NaV6yxmT86Q37ByKiKm7Q+C1eYivGHHY/n01rvAr5NFkFVMbCBLyjaK
9ORaK9wFIMAcAvD6G3ls+yG+wMO6HKd1zWVR9GOGwCFRPAKv6XO78PZsSQARAQAB
tCBKYW1lcyBQcmljZSA8anByaWNlMjFAZ21haWwuY29tPokCVAQTAQgAPhYhBIAt
txD8mnxeTZpvp9oDOW1J9iDzBQJgJPdLAhsDBQkDwmcABQsJCAcCBhUKCQgLAgQW
AgMBAh4BAheAAAoJENoDOW1J9iDz7jQP/itHQ/Lg0t0fh9GkoV9YB2gR/Ap7tDUo
Bzt9loP+qCBHT00oo/fCDYxC3qJBpvjm7A0WpRzNEvWmwnEKAZMIRrDKVY2dU/Pq
tXuwhubvCwE0hFAnkIE88tZnbmRDYcwc9o2cqEYqakDjpupKZB2FhnV2qs8+yip6
1p2vJPi+ZtDUp8H12iqvfEBgLPAUi3NLyn2vk/koi7o4ir3Pd7o7MALaCujK6XO1
brEXVSeINGojejms+nXvbGFti/tYY1xVDmbOyA/CoJ+/Zx5Bu5yVlzpJJIZEaMwW
+GGrXS4ldZziP6F5VbKBapvDWuocK0m6qGUhVr76NND6BDJHWb4DdIYc59XEr2Pg
lJHcl1WBIR+xrtuzrKmJ+O2Fliq9NqSpT3UY7SwtZk6FEV6abbkIsFl1S8IGabf+
nMLbFce0TQ7qIjcfJEE7YyXWSNs1pHRPGNGMb+DGlhwvZZzLMu/XEoeNMErhXn1C
0EcNld2gyI9fjqlZw+GMFANFoRrhtIqs7b8jSunZs67s+SAah9IJMxiZMCMBh342
xskXrlKJvfcyosUBWZVlyRn149YsPbAAxPqGTLVFd1F/KaG2Bw6p8wZCXt+4jCSp
3fTa3q9PLswGsyT4XxWEdUVURMiT0qKW2J0DzXOYWr9EBwcayQBaALlMRNJe8+oT
vec6OdfFMLDh
=+nnP
-----END PGP PUBLIC KEY BLOCK-----
</code></pre><p>Copy this whole output (including the <code>-----BEGIN PGP PUBLIC KEY BLOCK-----</code> and <code>-----END PGP PUBLIC KEY BLOCK-----</code>, and everything in between). This is your public key... You need to register this key with GitHub.</p><h2 id="register-you-keys-with-github">Register your keys with GitHub</h2><p>Open <a href="https://github.com/settings/keys">https://github.com/settings/keys</a><br>then click &quot;<em>New GPG key</em>&quot;, paste your public key and click &quot;<em>Add GPG key</em>&quot;.</p><p>(Full instructions on how to add GPG keys to GitHub are in the link below):</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.github.com/en/github/authenticating-to-github/adding-a-new-gpg-key-to-your-github-account"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Adding a new GPG key to your GitHub account - GitHub Docs</div><div class="kg-bookmark-description">To configure your GitHub account to use your new (or existing) GPG key, you&#x2019;ll also need to add it to your GitHub account.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.github.com/assets/images/site/favicon.svg" alt="Configure GPG to sign Git commits (in Windows)"><span class="kg-bookmark-author">GitHub Docs</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.github.com/assets/images/help/settings/userbar-account-settings.png" alt="Configure GPG to sign Git commits (in Windows)"></div></a></figure><h2 id="tell-git-to-sign-your-commits-">Tell git to sign your commits!</h2><p>This next command will instruct git to automatically sign <em>all</em> commits. It modifies your global <code>.gitconfig</code> file. If you&apos;d like to automatically sign commits to <em>only</em> the current repository, simply remove the <code>--global</code> from the commands below.</p><pre><code class="language-bash">$ git config --global user.signingkey DA03396D49F620F3
$ git config --global commit.gpgsign true</code></pre><p>Then, you have to tell git which GPG program you want it to use:</p><pre><code class="language-bash">$ git config --global gpg.program &quot;/c/Program Files (x86)/GnuPG/bin/gpg.exe&quot;</code></pre><p><strong><em>Optionally</em></strong>, you may need to disable TTY if your IDE doesn&apos;t like talking to gpg properly:</p><pre><code class="language-bash">$ echo &apos;no-tty&apos; &gt;&gt; ~/.gnupg/gpg.conf</code></pre><p>If you&apos;re using SourceTree, ensure that it&apos;s set to use the system git (that you just installed).</p><h2 id="are-we-done-yet">Are we done yet?</h2><p>If everything&apos;s configured properly, the next time you commit anything in your git IDE, you should be asked to provide the password for the GPG key you&apos;re signing with:</p><figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2021/02/Screen-Shot-2021-02-12-at-9.55.12-am.png" class="kg-image" alt="Configure GPG to sign Git commits (in Windows)" loading="lazy"></figure><p>If you do, you&apos;re done. If not, have a look through the references below. You might be doing something different to me, or require extra steps for your IDE or version of git / gpg.</p><h2 id="references">References</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.github.com/en/github/authenticating-to-github/managing-commit-signature-verification"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Managing commit signature verification - GitHub Docs</div><div class="kg-bookmark-description">You can sign your work locally using GPG or S/MIME. GitHub will verify these signatures so other people will know that your commits come from a trusted source.
GitHub will automatically sign commits you make using the GitHub web interface.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.github.com/assets/images/site/favicon.svg" alt="Configure GPG to sign Git commits (in Windows)"><span class="kg-bookmark-author">GitHub Docs</span></div></div></a></figure><p>If you&apos;re using SourceTree on a Mac, you don&apos;t need to go to these lengths at all. See:</p><p><a href="https://confluence.atlassian.com/sourcetreekb/setup-gpg-to-sign-commits-within-sourcetree-765397791.html">https://confluence.atlassian.com/sourcetreekb/setup-gpg-to-sign-commits-within-sourcetree-765397791.html</a></p>]]></content:encoded></item><item><title><![CDATA[Gracefully terminate a threaded C# console application on CTRL+C]]></title><description><![CDATA[<p>So, most console applications seemingly terminate instantly when they receive a <code>CTRL+C</code>, but occasionally you may notice that some have a termination message, or take an unusually long time to terminate.
This is probably due to the application winding itself up cleanly without corrupting anything that was <em>in-progress</em>.</p><p>This</p>]]></description><link>https://neurotechnics.com/blog/gracefully-terminate-a-threaded-csharp-console-application/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5b3</guid><category><![CDATA[console]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Mon, 25 Jan 2021 08:32:54 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2021/01/carlos-perez-NPWvH40Cn4I-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://neurotechnics.com/blog/content/images/2021/01/carlos-perez-NPWvH40Cn4I-unsplash.jpg" alt="Gracefully terminate a threaded C# console application on CTRL+C"><p>So, most console applications seemingly terminate instantly when they receive a <code>CTRL+C</code>, but occasionally you may notice that some have a termination message, or take an unusually long time to terminate. This is probably due to the application winding itself up cleanly without corrupting anything that was <em>in-progress</em>.</p><p>This becomes especially important when you&apos;re manipulating the file-system, talking to an external system, or dealing with threads that need to finish before you can just arbitrarily exit your process.</p><p>So, first of all, <em>how</em> do you detect a <code>CTRL+C</code>?<br>Use a delegate function to attach a handler to the <code>Console.CancelKeyPress</code> event. You can do this in a few ways, but the two most common are inline or defining an actual function.</p><p>Inline:</p><pre><code class="language-csharp">class Program
{
    // cancelled: Used for determining if a cancel has been requested
    // &quot;volatile&quot; ensures concurrent threads always read the latest value
    private static volatile bool cancelled = false;

    public static void Main()
    {
        Console.CancelKeyPress += delegate (object sender, ConsoleCancelEventArgs args) {
            args.Cancel = true;
            Program.cancelled = true;
            Console.WriteLine(&quot;CANCEL command received! Cleaning up, please wait...&quot;);
        };
    }
}</code></pre><p>Or, as a function:</p><pre><code class="language-csharp">class Program
{
    // cancelled: Used for determining if a cancel has been requested
    private static volatile bool cancelled = false;

    public static void Main()
    {
        Console.CancelKeyPress += new ConsoleCancelEventHandler(myHandler);
    }

    protected static void myHandler(object sender, ConsoleCancelEventArgs args)
    {
        args.Cancel = true;
        Program.cancelled = true;
        Console.WriteLine(&quot;CANCEL command received! Cleaning up, please wait...&quot;);
    }
}</code></pre><p>That&apos;s it. That&apos;s the handler that sets you up for gracefully handling program termination.</p><p>Once the handler has been defined, you can read the user-defined <code>Program.cancelled</code> boolean property at any time to check if the user has requested termination of the running process.</p><p>For example, in your user-defined &quot;Run&quot; function you might do this:</p><pre><code class="language-csharp">private static void Run()
{
    var data = ReadSomeData(); // takes a few seconds to complete
    
    if (Program.cancelled) return; // Check if we received CTRL+C
    
    ProcessSomeData(data); // Might take several hours to complete.
}</code></pre><p>The Run function simply loads some data and processes it; however, if the user tries to terminate using <code>CTRL+C</code>, the process is halted before the second half of the process begins.</p><p>The <code>ReadSomeData()</code> function might check for termination as well; however, we&apos;ll focus on the <code>ProcessSomeData()</code> function, as things get more complicated once we begin <em>updating</em> things.</p><pre><code class="language-csharp">/**
 * A long-running process that spawns multiple threads.
 */
private static void ProcessSomeData(List&lt;MyDataObject&gt; data)
{
    var maxThreadCount = Math.Max(1, Math.Min(Environment.ProcessorCount, 4));
    var opts = new ParallelOptions {
        MaxDegreeOfParallelism = maxThreadCount
    };
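    // (Sketch) ParallelOptions also exposes a CancellationToken property;
    // cancelling it is the framework&apos;s built-in way to stop scheduling new
    // iterations, e.g. opts.CancellationToken = cts.Token, where &quot;cts&quot; is a
    // hypothetical CancellationTokenSource.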
    
    // For each item in the list, process as many as we can in parallel threads.
    Parallel.ForEach(data, opts, ProcessDataItem);
}</code></pre><p>Interestingly, this function <em>does not</em> check for process termination. Instead, it&apos;s simply responsible for spawning a parallel, threaded loop of data processing functions.</p><p>The reason it can&apos;t check for termination is that the only function here that really does anything is the &quot;ForEach&quot;, and it doesn&apos;t have a callback. Interestingly, a Parallel.ForEach can&apos;t be &quot;broken&quot; (using <code>break</code>/<code>continue</code> etc.); you need to exit the current iteration of the function using <code>return</code> and prevent further iterations of the ForEach loop via <code>ParallelLoopState.Stop()</code>.</p><p>You could inline the &quot;Action&lt;&gt;&quot; but for the sake of simplicity I&apos;m using a regular function.</p><p>So, inside the <code>ProcessDataItem()</code> function, which may be called for different data items simultaneously in multiple threads, we need to check for cancellation:</p><pre><code class="language-csharp">private static void ProcessDataItem(MyDataObject item, ParallelLoopState state)
{
    // ...do some small calculations or data manipulation
    var data = PreProcessItemData(item);
    
    // check for cancellation
    if (Program.cancelled)
    {
        state.Stop();
        return; // nothing has been written, so we can safely return
    }
    
    // Do some actual manipulation of a DB or the filesystem.
    var result = WriteData(data);
    
    // Check for cancellation here, but DO NOT return.
    // The second half of this process needs to complete
    // or the data will be out of sync.
    if (Program.cancelled)
    {
        state.Stop();
    }
    
    // Another arbitrary data processing function
    var journalData = PostProcessResult(result);
    var success = WriteJournalData(journalData);
    
    // We&apos;re at the end of the function,
    // but, if the user has requested termination,
    // we still set the stopped state, so a new
    // loop of the ForEach does not begin after this one.
    if (Program.cancelled)
    {
        state.Stop();
        // &quot;return;&quot; is not required here, but we may
        // add more code after this check later.
        return;
    }
}</code></pre><p>Terminating individual iterations of this function will end the <code>ForEach</code> parallel processing. Control will eventually return to the <code>Main()</code> function and your program will end normally, safely, and hopefully without any corruption of half-processed events.</p><p>See also:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.microsoft.com/en-us/dotnet/api/system.console.cancelkeypress"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Console.CancelKeyPress Event (System)</div><div class="kg-bookmark-description">Occurs when the Control modifier key (Ctrl) and either the C console key (C) or the Break key are pressed simultaneously (Ctrl+C or Ctrl+Break).</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.microsoft.com/favicon.ico" alt="Gracefully terminate a threaded C# console application on CTRL+C"><span class="kg-bookmark-author">Microsoft Docs</span><span class="kg-bookmark-publisher">dotnet-bot</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.microsoft.com/en-us/media/logos/logo-ms-social.png" alt="Gracefully terminate a threaded C# console application on CTRL+C"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.microsoft.com/en-us/dotnet/standard/parallel-programming/how-to-write-a-simple-parallel-foreach-loop"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Write a simple parallel program using Parallel.ForEach</div><div class="kg-bookmark-description">In this article, learn how to enable data parallelism in .NET.
Write a Parallel.ForEach loop over any IEnumerable or IEnumerable&lt;T&gt; data source.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.microsoft.com/favicon.ico" alt="Gracefully terminate a threaded C# console application on CTRL+C"><span class="kg-bookmark-author">Microsoft Docs</span><span class="kg-bookmark-publisher">IEvangelist</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.microsoft.com/en-us/media/logos/logo-ms-social.png" alt="Gracefully terminate a threaded C# console application on CTRL+C"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[How to restore NuGet packages from the console when Visual Studio refuses]]></title><description><![CDATA[<p>On occasion, and especially if I switch branches in git, my Visual Studio projects will load without any of the dependencies they rely on (generally in the packages folder*). Telling Visual Studio to restore all packages in various ways (including a full re-build) always seems to fail... Visual Studio&apos;</p>]]></description><link>https://neurotechnics.com/blog/how-to-restore-nuget-packages-from-the-console/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5b1</guid><category><![CDATA[powershell]]></category><category><![CDATA[nuget]]></category><category><![CDATA[visualstudio]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Mon, 25 Jan 2021 04:11:33 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2021/01/vsts-nuget.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://neurotechnics.com/blog/content/images/2021/01/vsts-nuget.jpg" alt="How to restore NuGet packages from the console when Visual Studio refuses"><p>On occasion, and especially if I switch branches in git, my Visual Studio projects will load without any of the dependencies they rely on (generally in the packages folder*).
Telling Visual Studio to restore all packages in various ways (including a full re-build) always seems to fail... Visual Studio&apos;s package manager thinks everything is already up to date... which it clearly is not.</p><p>So, command line to the rescue. You have two options:</p><ul><li>Use <code>nuget.exe</code></li><li>Use the <em>Package Manager Console</em> in Visual Studio (recommended)</li></ul><h2 id="using-nuget">Using NuGet</h2><p>From the command line, you can run the following <code>nuget.exe</code> command for each project:</p><pre><code>nuget install packages.config
</code></pre><p>Or with <a href="https://docs.microsoft.com/en-au/nuget/tools/cli-ref-restore" rel="noreferrer">NuGet 2.7 you can restore all packages in the solution</a> using the command:</p><pre><code>nuget restore YourSolution.sln</code></pre><p>Both of these will pull down the packages.</p><p><em>Note: </em>Your project files will not be modified when running this command, so your project should already have references to any applicable NuGet packages. If this is not the case then you can use Visual Studio to install the packages.</p><p>With NuGet 2.7 and above, Visual Studio will automatically restore missing NuGet packages when you build your solution, so there is no need to use NuGet.exe.</p><p>To update all packages in your solution, first restore them; then you can use NuGet.exe to update the packages, update them from the Package Manager Console window within Visual Studio (see below), or use the Manage Packages dialog.</p><p>From the command line you can update packages in the solution to the latest version available from <a href="https://nuget.org">https://nuget.org</a>:</p><pre><code>nuget update YourSolution.sln
</code></pre><p><em>Note: </em>this will not run any PowerShell scripts in any NuGet packages.</p><h2 id="using-package-manager-console">Using Package Manager Console</h2><p>First, make sure you have the <strong>Package Manager Console</strong> open (<code>Tools</code> &gt; <code>NuGet Package Manager</code> &gt; <code>Package Manager Console</code>) and enter the following command:</p><pre><code class="language-PowerShell">Update-Package -reinstall</code></pre><pre><code>Attempting to gather dependency information for multiple packages with respect to project &apos;openid&apos;, targeting &apos;.NETFramework,Version=v4.7.2&apos;
Gathering dependency information took 1.57 min
Attempting to resolve dependencies for multiple packages.
Resolving dependency information took 0 ms
Resolving actions install multiple packages
etc...</code></pre><p>If you only need to update packages for a specific project (rather than the whole solution), just use the following:</p><pre><code class="language-PowerShell">Update-Package -reinstall -project [ProjectName]</code></pre><p>If you&apos;re using the <em>newer</em> Package Reference method of including packages in your project (as opposed to the <em>packages.config</em> file method) you probably won&apos;t suffer this problem, as the project file keeps the package references internally.</p><p>See also:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://learn.microsoft.com/en-us/nuget/consume-packages/package-restore"><div class="kg-bookmark-content"><div class="kg-bookmark-title">NuGet Package Restore</div><div class="kg-bookmark-description">See an overview of how NuGet restores packages a project depends on, including how to disable restore and constrain versions.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://learn.microsoft.com/favicon.ico" alt="How to restore NuGet packages from the console when Visual Studio refuses"><span class="kg-bookmark-author">Microsoft Learn</span><span class="kg-bookmark-publisher">JonDouglas</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://learn.microsoft.com/en-us/media/logos/logo-ms-social.png" alt="How to restore NuGet packages from the console when Visual Studio refuses"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.microsoft.com/en-us/nuget/consume-packages/reinstalling-and-updating-packages"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Reinstalling and Updating NuGet Packages</div><div class="kg-bookmark-description">Details on when it&#x2019;s necessary to reinstall and update packages, as with broken package references in Visual Studio.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon"
src="https://docs.microsoft.com/favicon.ico" alt="How to restore NuGet packages from the console when Visual Studio refuses"><span class="kg-bookmark-author">Microsoft Learn</span><span class="kg-bookmark-publisher">JonDouglas</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://learn.microsoft.com/en-us/media/logos/logo-ms-social.png" alt="How to restore NuGet packages from the console when Visual Studio refuses"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://stackoverflow.com/a/6882750/975897"><div class="kg-bookmark-content"><div class="kg-bookmark-title">How do I get NuGet to install/update all the packages in the packages.config?</div><div class="kg-bookmark-description">I have a solution with multiple projects in it. Most of the third party references are missing, yet there are packages.config file for each project. How do I get NuGet to install/update all the pa...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://cdn.sstatic.net/Sites/stackoverflow/Img/apple-touch-icon.png?v=c78bd457575a" alt="How to restore NuGet packages from the console when Visual Studio refuses"><span class="kg-bookmark-author">Stack Overflow</span><span class="kg-bookmark-publisher">Samuel Goldenbaum</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cdn.sstatic.net/Sites/stackoverflow/Img/apple-touch-icon@2.png?v=73d79a89bded" alt="How to restore NuGet packages from the console when Visual Studio refuses"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Ad-Reveal for Chrome and Firefox]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2020/01/adreveal_red_256-1.png" class="kg-image" alt loading="lazy"></figure><p>The new <strong><em>Ad Reveal</em></strong> browser extension for Chrome and Firefox has been officially released.</p><p>You can find it now on the 
add-on stores for Firefox:<br><a href="https://addons.mozilla.org/en-US/firefox/addon/ad-reveal/">https://addons.mozilla.org/en-US/firefox/addon/ad-reveal/</a><br>and chrome:<br><a href="https://chrome.google.com/webstore/detail/ad-reveal/jlbfnlnkmapanbohikbcnlpgpjdpnndc?hl=en-GB">https://chrome.google.com/webstore/detail/ad-reveal/jlbfnlnkmapanbohikbcnlpgpjdpnndc</a></p><p>The first few versions will remain quite</p>]]></description><link>https://neurotechnics.com/blog/ad-reveal-browser-extension/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5ae</guid><category><![CDATA[firefox]]></category><category><![CDATA[chrome]]></category><category><![CDATA[extension]]></category><category><![CDATA[adblocker]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Wed, 15 Jan 2020 23:08:28 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2020/02/reveal_ads.jpg" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://neurotechnics.com/blog/content/images/2020/01/adreveal_red_256-1.png" class="kg-image" alt="Ad-Reveal for Chrome and Firefox" loading="lazy"></figure><img src="https://neurotechnics.com/blog/content/images/2020/02/reveal_ads.jpg" alt="Ad-Reveal for Chrome and Firefox"><p>The new <strong><em>Ad Reveal</em></strong> browser extension for Chrome and Firefox has been officially released.</p><p>You can find it now on the add-on stores for Firefox:<br><a href="https://addons.mozilla.org/en-US/firefox/addon/ad-reveal/">https://addons.mozilla.org/en-US/firefox/addon/ad-reveal/</a><br>and chrome:<br><a href="https://chrome.google.com/webstore/detail/ad-reveal/jlbfnlnkmapanbohikbcnlpgpjdpnndc?hl=en-GB">https://chrome.google.com/webstore/detail/ad-reveal/jlbfnlnkmapanbohikbcnlpgpjdpnndc</a></p><p>The first few versions will remain quite simple with hard-coded inline/disguised/native advertising post highlighting and low-lighting. 
Promotional posts will be marked with a red side-bar and an icon indicating Ad Reveal has detected promotional or sponsored material, and fade the content into the background slightly.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://neurotechnics.com/blog/content/images/2020/01/reddit_v0.0.2--Custom-.png" width="640" height="400" loading="lazy" alt="Ad-Reveal for Chrome and Firefox" srcset="https://neurotechnics.com/blog/content/images/size/w600/2020/01/reddit_v0.0.2--Custom-.png 600w, https://neurotechnics.com/blog/content/images/2020/01/reddit_v0.0.2--Custom-.png 640w"></div><div class="kg-gallery-image"><img src="https://neurotechnics.com/blog/content/images/2020/01/pinterest_v0.0.2--Custom-.png" width="640" height="400" loading="lazy" alt="Ad-Reveal for Chrome and Firefox" srcset="https://neurotechnics.com/blog/content/images/size/w600/2020/01/pinterest_v0.0.2--Custom-.png 600w, https://neurotechnics.com/blog/content/images/2020/01/pinterest_v0.0.2--Custom-.png 640w"></div><div class="kg-gallery-image"><img src="https://neurotechnics.com/blog/content/images/2020/01/fb_v0.0.2--Custom-.png" width="640" height="400" loading="lazy" alt="Ad-Reveal for Chrome and Firefox" srcset="https://neurotechnics.com/blog/content/images/size/w600/2020/01/fb_v0.0.2--Custom-.png 600w, https://neurotechnics.com/blog/content/images/2020/01/fb_v0.0.2--Custom-.png 640w"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://neurotechnics.com/blog/content/images/2020/02/ebay_v0.0.4.png" width="1280" height="800" loading="lazy" alt="Ad-Reveal for Chrome and Firefox" srcset="https://neurotechnics.com/blog/content/images/size/w600/2020/02/ebay_v0.0.4.png 600w, https://neurotechnics.com/blog/content/images/size/w1000/2020/02/ebay_v0.0.4.png 1000w, https://neurotechnics.com/blog/content/images/2020/02/ebay_v0.0.4.png 
1280w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://neurotechnics.com/blog/content/images/2020/02/google_v0.0.4.png" width="1280" height="800" loading="lazy" alt="Ad-Reveal for Chrome and Firefox" srcset="https://neurotechnics.com/blog/content/images/size/w600/2020/02/google_v0.0.4.png 600w, https://neurotechnics.com/blog/content/images/size/w1000/2020/02/google_v0.0.4.png 1000w, https://neurotechnics.com/blog/content/images/2020/02/google_v0.0.4.png 1280w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>Sample screenshots</figcaption></figure><p>Currently works with: <strong><em>Facebook</em></strong>, <strong><em>Twitter</em></strong>, <strong><em>Reddit</em></strong>, <strong><em>Pinterest</em></strong>, <strong><em>Ebay</em></strong>, and <strong><em>Google</em></strong> search, and there will be many more to follow.</p><p>Coming soon will be the ability to enable/disable the extension for individual domains, customize the extensions visual effect for individual domains (colors, fade effect etc.), as well as the ability for advanced users to add custom selectors to highlight their own advertisements not covered by the built-in site gallery.</p><p>Additional sites on the radar for the next version:</p><ul><li>Feedly</li><li>Product Hunt</li></ul><p>Stay Tuned.</p>]]></content:encoded></item><item><title><![CDATA[Solved: Cannot connect to an SMB2 network share form Windows Server 2019]]></title><description><![CDATA[<p>Guest access in SMB2 will be disabled by default in Windows Server 2019.</p><p>According to microsoft (see: <a href="https://support.microsoft.com/en-us/help/4046019/guest-access-in-smb2-disabled-by-default-in-windows-10-and-windows-ser">https://support.microsoft.com/en-us/help/4046019/guest-access-in-smb2-disabled-by-default-in-windows-10-and-windows-ser</a>)</p><blockquote>In Windows 10, version 1709, Windows 10, version 1903, Windows &#xA0;Server, version 1709, Windows &#xA0;Server, 
version 1903, and Windows Server &#xA0;2019, the</blockquote>]]></description><link>https://neurotechnics.com/blog/solved-cannot-connect-to-an-smb2-network-share-form-windows-server-2019/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5ad</guid><category><![CDATA[smb]]></category><category><![CDATA[windows]]></category><category><![CDATA[server 2019]]></category><category><![CDATA[network]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Tue, 29 Oct 2019 10:40:20 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2019/10/Slide1CoverArt.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://neurotechnics.com/blog/content/images/2019/10/Slide1CoverArt.jpg" alt="Solved: Cannot connect to an SMB2 network share form Windows Server 2019"><p>Guest access in SMB2 will be disabled by default in Windows Server 2019.</p><p>According to microsoft (see: <a href="https://support.microsoft.com/en-us/help/4046019/guest-access-in-smb2-disabled-by-default-in-windows-10-and-windows-ser">https://support.microsoft.com/en-us/help/4046019/guest-access-in-smb2-disabled-by-default-in-windows-10-and-windows-ser</a>)</p><blockquote>In Windows 10, version 1709, Windows 10, version 1903, Windows &#xA0;Server, version 1709, Windows &#xA0;Server, version 1903, and Windows Server &#xA0;2019, the SMB2 client no longer allows the following actions:<br> &#xA0;* Guest account access to a remote server;<br> &#xA0;* Fallback to the Guest account after invalid credentials are provided;</blockquote><p>Resolution:</p><p>If you want to enable insecure guest access, you can configure the following Group Policy settings (Select <em>Edit Group Policy</em> from the control panel), and navigate to:</p><p>&gt; ComputerConfiguration<br>	&gt; Administrative templates<br>		&gt; Network<br>			&gt; Lanman Workstation</p><p>Here there should be an entry called <code>Enable insecure guest logons</code></p><figure class="kg-card kg-image-card"><img 
src="https://neurotechnics.com/blog/content/images/2019/10/Screen-Shot-2019-10-29-at-9.26.57-pm.png" class="kg-image" alt="Solved: Cannot connect to an SMB2 network share form Windows Server 2019" loading="lazy"></figure><p>Double click this settings entry to change its configuration. Change this setting from <code>Not configured</code> to <code>Enabled</code>.</p><p>That&apos;s it. You should now have access to your guest network shares.</p><p>Note: By enabling insecure guest logons, this setting reduces the security of Windows clients.</p>]]></content:encoded></item><item><title><![CDATA[How to fix: The symbolic link cannot be followed because its type is disabled]]></title><description><![CDATA[<p>Occasionally, when copying folders from remote servers, or when accessing a symbolic link directly on a file server, you may see the following error:</p><p><code>The symbolic link cannot be followed because its type is disabled</code></p><p>This is because by default <em>remote to remote symbolic links</em> are disabled. 
You can enable</p>]]></description><link>https://neurotechnics.com/blog/fix-symbolic-link-cannot-be-followed/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5ac</guid><category><![CDATA[symlink]]></category><category><![CDATA[symbolic link]]></category><category><![CDATA[disabled]]></category><category><![CDATA[windows]]></category><category><![CDATA[fsutil]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Tue, 15 Oct 2019 23:16:17 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2019/10/links-1920.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://neurotechnics.com/blog/content/images/2019/10/links-1920.jpg" alt="How to fix: The symbolic link cannot be followed because its type is disabled"><p>Occasionally, when copying folders from remote servers, or when accessing a symbolic link directly on a file server, you may see the following error:</p><p><code>The symbolic link cannot be followed because its type is disabled</code></p><p>This is because by default <em>remote to remote symbolic links</em> are disabled. You can enable it with <strong><em>fsutil.</em></strong><br>See: <a href="https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil">https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil</a></p><p>To view the current status of the Symlink settings on your system, execute the following command from an elevated (administrator) command prompt:<br><code>fsutil behavior query SymlinkEvaluation</code></p><pre><code class="language-dos">C:\&gt;fsutil behavior query SymlinkEvaluation

Local to local symbolic links are enabled.
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to remote symbolic links are disabled.</code></pre><p>You&apos;ll notice the last line of the response to the above command notes that<br><code>Remote to remote symbolic links are <strong><em>disabled</em></strong></code><em>.</em></p><p><em>In order to enable remote to remote symbolic links, enter the following command:</em></p><pre><code>C:\&gt;fsutil behavior set SymlinkEvaluation R2R:1</code></pre><p>You won&apos;t see any response if the command was successful. To check that the setting has been updated, enter the evaluation query again:</p><pre><code class="language-dos">C:\&gt;fsutil behavior query SymlinkEvaluation

Local to local symbolic links are enabled.
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to remote symbolic links are enabled.</code></pre><p>If you also need to enable <em><strong>remote to local</strong></em> link evaluation, you can substitute R2R:1 with R2L:1 in the set behavior command.<br>(<code>fsutil behavior set SymlinkEvaluation R2L:1</code>)</p>]]></content:encoded></item><item><title><![CDATA[Linux: Failed to start Raise network interfaces]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>We use a lot of virtualization technology in our stack, particularly during development. It just makes life easier. However, there are times when problems arise purely from the fact we&apos;re testing the latest tech.</p>
<p>Most recently, we upgraded all of our VM&apos;s to the latest Linux</p>]]></description><link>https://neurotechnics.com/blog/linux-failed-to-start-raise-network-interfaces/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5ab</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[network]]></category><category><![CDATA[virtualbox]]></category><category><![CDATA[bitnami]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Mon, 07 May 2018 00:05:39 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2018/11/thomas-jensen-592813-unsplash--Medium-.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://neurotechnics.com/blog/content/images/2018/11/thomas-jensen-592813-unsplash--Medium-.jpg" alt="Linux: Failed to start Raise network interfaces"><p>We use a lot of virtualization technology in our stack, particularly during development. It just makes life easier. However, there are times when problems arise purely from the fact we&apos;re testing the latest tech.</p>
<p>Most recently, we upgraded all of our VMs to the latest Linux (some are Debian, some are Ubuntu). Immediately, we noticed that the new machines couldn&apos;t connect to the network... They appeared to be connected properly; they even had a DHCP-allocated IP address, but they just didn&apos;t have a connection.</p>
<p>Turns out the network interfaces were not being initialised properly:</p>
<pre><code>Failed to start Raise network interfaces
See &apos;systemctl status networking.service&apos; for details
</code></pre>
<p>After some research, it appears that the issue is related to the Predictable Network Interface Names feature of systemd/udev.</p>
<p>In order to resolve the issue, we created a new file <code>10-rename-network.rules</code> in <code>/etc/udev/rules.d/</code> with a rule in it to attach a specific name to the network interface matching the MAC address of the Network Interface Card (NIC):</p>
<pre><code class="language-bash">sudo vi /etc/udev/rules.d/10-rename-network.rules
</code></pre>
<p>and add the following content to it:</p>
<pre><code>SUBSYSTEM==&quot;net&quot;, ACTION==&quot;add&quot;, ATTR{address}==&quot;ff:ff:ff:ff:ff:ff&quot;, NAME=&quot;eth0&quot;
</code></pre>
<p>where:<br>
<code>eth0</code> = desired network interface name, used in <code>/etc/network/interfaces</code> or <code>/etc/network/interfaces.d/setup</code>;<br>
<code>ff:ff:ff:ff:ff:ff</code> = hardware mac address of the network device;</p>
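<p>If you don&apos;t know the MAC address of the NIC, you can list every interface alongside its hardware address straight from sysfs. (This is a quick sketch, not part of the original fix; interface names will vary per system.)</p>
<pre><code class="language-bash"># Print each network interface with its MAC address
for nic in /sys/class/net/*; do
  printf '%s  %s\n' "$(basename "$nic")" "$(cat "$nic/address")"
done
</code></pre>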
<p>Additionally, you&apos;ll need to execute the following to generate a new boot/initrd image:</p>
<pre><code class="language-bash">sudo update-initramfs -u
</code></pre>
<p>and, reboot...</p>
<pre><code class="language-bash">sudo shutdown -r now
</code></pre>
<p>et voil&#xE0;, you have yourself a connected VM.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[git commit accidents - how to undo one]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>So you&apos;ve just pushed your local branch to a remote branch, but then realized that one of the commits should not be there, or that there was some unacceptable typo in it. No problem, you can fix it. But you should do it rather fast before anyone fetches</p>]]></description><link>https://neurotechnics.com/blog/git-commit-accidents-how-to-undo-one/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5a4</guid><category><![CDATA[git]]></category><category><![CDATA[github]]></category><category><![CDATA[reverse]]></category><category><![CDATA[commit]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Mon, 05 Mar 2018 22:41:28 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2017/03/git.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://neurotechnics.com/blog/content/images/2017/03/git.jpg" alt="git commit accidents - how to undo one"><p>So you&apos;ve just pushed your local branch to a remote branch, but then realized that one of the commits should not be there, or that there was some unacceptable typo in it. No problem, you can fix it. But you should do it rather fast before anyone fetches the bad commits, or you won&apos;t be very popular with them for a while ;)</p>
<p>If you need to keep the history of your repository intact, you basically have two options (with a third <em>workaround</em>):</p>
<h2 id="option1updateyourrepositorywithanewcommit">Option 1: Update your repository with a new commit</h2>
<p>For a simple correction in one, or even a handful, of files, simply remove or fix the bad file(s) in a new commit and push it to the remote repository. This is the easiest (and non-destructive) way to correct an error, and should suffice in most cases. This way your original (bad) commit will remain, but you will have a complete history.</p>
<h2 id="option2revertthecommit">Option 2: &quot;Revert&quot; the commit</h2>
<p>If you need to back out every single change in all files in a single commit, you can do this easily by executing a <code>revert</code>.</p>
<p>This is a good alternative to the first option above, when you just need to do a bulk undo of the previously (or at any point in the past) committed changes.</p>
<p><em>Reverting</em> a commit means to create a new commit, that undoes all changes made in the commit you are reverting.</p>
<p>As with option 1, your history will remain intact.</p>
<p>To do this, simply specify the revert command along with the hash of the commit you wish to undo. Git will handle the rest:</p>
<pre><code class="language-bash">$ git revert 7c63649
</code></pre>
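<p>Here&apos;s a minimal, runnable sketch of a revert in a throwaway repository (the file name and commit messages are illustrative only):</p>
<pre><code class="language-bash"># Set up a throwaway repo with a good commit and a bad one
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "good" > app.txt
git add app.txt
git commit -qm "good commit"
echo "oops" >> app.txt
git commit -aqm "bad commit"

# Create a new commit that undoes the bad one; history stays intact
git revert --no-edit HEAD
git log --oneline   # three commits: good, bad, and the revert
</code></pre>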
<h2 id="option3rewritehistory">Option 3 - Rewrite History</h2>
<p>It should be noted that rewriting history can wreak havoc on other users if you have a very large developer base all referencing your repository, especially if it is highly active and people rely on staying up to date... You should generally avoid history rewriting for this reason. Your new version of history cannot simply be pulled, as would normally be the case. If other users are referencing commits that sit beyond the point where you rewrote history, they will have some serious work on their hands when they need to merge changes already completed in their local repositories.</p>
<p>However, sometimes you do want to rewrite the history. Be it because of leaked sensitive information, to get rid of some very large files that should not have been there in the first place, or just because you want a clean history (I certainly do).</p>
<p>I usually also do a lot of very heavy history rewriting when converting some repository from Subversion or Mercurial over to Git, be it to enforce internal LF line endings, fixing committer names and email addresses or to completely delete some large folders from all revisions. I recently also had to rewrite a large git repository to get rid of some corruption in an early commit that started causing more and more problems.</p>
<p>Yes, you should avoid rewriting history which already passed into other forks if possible, but the world does not end if you do nevertheless. For example you can still cherry-pick commits between the histories, e.g. to fetch some pull requests on top of the old history.</p>
<p>In open-source projects, always contact the repository maintainer first before doing any history rewriting. There are maintainers that do not allow any rewriting in general and block any non-fast-forward pushes. Others prefer doing such rewritings themselves.</p>
<h3 id="case1deletethelastcommit">Case 1: Delete the last commit</h3>
<p>Deleting the last commit is the easiest case. Let&apos;s say we have a remote <strong>origin</strong> with branch master that currently points to commit <code>dd61ab32</code>. We want to remove the top commit. Translated to git terminology, we want to force the master branch of the <strong>origin</strong> remote repository to the parent of <code>dd61ab32</code> (<code>x^</code> points to the parent of <code>x</code>):</p>
<pre><code class="language-bash">$ git push origin +dd61ab32^:master
</code></pre>
<p>Where git interprets <code>hhhhhhh^</code> as the parent of the commit you wish to reverse and <code>+</code> as a forced non-fast-forward push. If you have the master branch checked out locally, you can also do it in two simpler steps: First reset the branch to the parent of the current commit, then force-push it to the remote.</p>
<pre><code class="language-bash">$ git reset HEAD^ --hard
$ git push origin -f
</code></pre>
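<p>The same two-step flow, sketched in a throwaway local repository (there is no remote here, so the force-push is shown as a comment; file names and messages are illustrative):</p>
<pre><code class="language-bash"># Throwaway repo with two commits; we want to drop the most recent one
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "keep" > a.txt
git add a.txt
git commit -qm "keep me"
echo "drop" > b.txt
git add b.txt
git commit -qm "drop me"

# Move the branch (and working tree) back to the parent commit
git reset HEAD^ --hard
# Against a real remote you would now: git push origin -f
</code></pre>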
<h3 id="case2deletethesecondlastcommit">Case 2: Delete the second last commit</h3>
<p>Let&apos;s say the bad commit <code>dd61ab32</code> is not the top commit, but a slightly older one, e.g. the second last one. We want to remove it, but keep all commits that followed it. In other words, we want to rewrite the history and force the result back to <em>origin/master</em>. The easiest way to rewrite history is to do an interactive rebase down to the parent of the offending commit:</p>
<pre><code class="language-bash">$ git rebase -i 7c63649^
</code></pre>
<p>This will open an editor and show a list of all commits since the commit we want to get rid of:</p>
<pre><code>pick dd61ab32
pick dsadhj278
...etc...
</code></pre>
<p>Simply remove the line with the offending commit - most likely the first line (vi: delete current line = <code>dd</code>). Save and close the editor (vi: press <code>:wq</code> and return). Resolve any conflicts that arise, and your local branch should be fixed. Force-push it to the remote and you&apos;re done:</p>
<pre><code class="language-bash">$ git push origin -f
</code></pre>
<h3 id="case3fixatypoinoneofthecommits">Case 3: Fix a typo in one of the commits</h3>
<p>This works almost exactly the same way as case 2, but instead of removing the line with the bad commit, simply replace its pick with edit and save/exit. Rebase will then stop at that commit, put the changes into the index and then let you change it as you like. Commit the change and continue the rebase (git will tell you how to keep the commit message and author if you want). Then push the changes as described above. The same way you can even split commits into smaller ones, or merge commits together.</p>
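<p>The whole edit flow can be sketched non-interactively in a throwaway repository, using <code>GIT_SEQUENCE_EDITOR</code> as a stand-in for the editor (file names and messages are illustrative; GNU sed assumed):</p>
<pre><code class="language-bash"># Throwaway repo: the *first* of two commits contains a typo
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "helo wrld" > readme.txt
git add readme.txt
git commit -qm "add readme"
echo "notes" > notes.txt
git add notes.txt
git commit -qm "add notes"

# Change the first 'pick' to 'edit' instead of editing the todo list by hand
GIT_SEQUENCE_EDITOR="sed -i '1s/^pick/edit/'" git rebase -i --root

# Rebase has stopped at the bad commit: fix the file and amend
echo "hello world" > readme.txt
git add readme.txt
git commit -q --amend --no-edit
git rebase --continue
# Against a real remote you would now: git push origin -f
</code></pre>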
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Automatic NTLM Authentication in your browser]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>If you&apos;re in an authenticated network environment, an intranet or other workplace environment where you need to authenticate using NTLM, you&apos;ve probably been frustrated by the situation where you need to enter your windows credentials a dozen or more times a day, even though you&apos;</p>]]></description><link>https://neurotechnics.com/blog/automatic-ntlm-authentication-in-your-browser/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5aa</guid><category><![CDATA[authentication]]></category><category><![CDATA[ntlm]]></category><category><![CDATA[firefox]]></category><category><![CDATA[chrome]]></category><category><![CDATA[ie]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Tue, 21 Nov 2017 02:56:13 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2017/11/igor-ovsyannykov-329196-2.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://neurotechnics.com/blog/content/images/2017/11/igor-ovsyannykov-329196-2.jpg" alt="Automatic NTLM Authentication in your browser"><p>If you&apos;re in an authenticated network environment, an intranet or other workplace environment where you need to authenticate using NTLM, you&apos;ve probably been frustrated by the situation where you need to enter your windows credentials a dozen or more times a day, even though you&apos;re already logged into the network itself, in order to access resources on your corporate intranet - Webmail, time-sheets, documents, HR and probably many others. Why can&apos;t the browser just know who you are and authenticate you automatically.</p>
<p>Turns out it can.</p>
<p>Firefox and Chrome/IE do it slightly differently, but it&apos;s essentially the same process. You just need to whitelist the domain names you want to allow automatic authentication for, and let Windows save your credentials.</p>
<h2 id="ieandchrome">IE (and Chrome)</h2>
<p>Internet Explorer supports Integrated Windows Authentication (IWA) out of the box, but may need additional configuration depending on the network or domain environment.</p>
<p>In Active Directory (AD) environments, the default authentication protocol for IWA is Kerberos, with a fallback to NTLM. Chrome uses Windows settings for all of its security policies, so once you configure IE, Chrome will comply and work automatically.</p>
<p>In Windows 10 you can simply hit your Start button and search for &quot;Internet Options&quot; - it&apos;s a Control Panel menu. Alternatively, you can open Internet Explorer and select &quot;<em>Settings</em>&quot; (the gear), then &quot;<em>Internet Options</em>&quot;.<br>
From here, select either <em>Local Intranet</em> or <em>Trusted Sites</em> and click the <em><strong>Sites</strong></em> button to edit the sites options, then click <em><strong>Advanced</strong></em> to edit the list of urls for the zone.<br>
<img src="https://neurotechnics.com/blog/content/images/2017/11/ie-settings.png" alt="Automatic NTLM Authentication in your browser" loading="lazy"><br>
Then, add the domains you&apos;d like to trust for authentication to this list.</p>
<p>That&apos;s basically all you have to do. Of course, you also need to have your credentials stored by Windows in order to allow automatic authentication. Normally, logging into the network will do this; however, if the intranet site or proxy you&apos;re connecting to hasn&apos;t been used before, you may need to manually add the credentials to Windows.</p>
<p>To do this, you simply need to open the &quot;<em><strong>Credential Manager</strong></em>&quot; (either from search, or control panel), Select the <em><strong>Windows Credentials</strong></em> option at the top and add a new credential for the domain you&apos;re connecting to. Simple.<br>
<img src="https://neurotechnics.com/blog/content/images/2017/11/credentials.png" alt="Automatic NTLM Authentication in your browser" loading="lazy"></p>
<h2 id="firefox">Firefox</h2>
<p>Firefox is (comparatively) much easier to configure. Although Firefox supports the Kerberos/NTLM authentication protocols, it must be manually configured to work correctly. Firefox doesn&apos;t use the concept of security zones like IE; however, it won&apos;t automatically present credentials to any host unless explicitly configured. By default, Firefox rejects all SPNEGO (Simple and Protected GSS-API Negotiation) challenges from any web server, including the IWA Adapter. What this means is that you will be presented with a login prompt every time you visit a site that uses this authentication method, even when you are already logged into your network.</p>
<p>Firefox must be manually configured for a whitelist of sites permitted to exchange SPNEGO protocol messages with the browser.</p>
<p>To configure automatic authentication in Firefox, you have to modify three parameters.</p>
<ol>
<li>Open a new tab and navigate to the page <code>about:config</code> (in the address bar);</li>
<li>Add your URIs (separated with <code>,</code>) to the following three parameters:</li>
</ol>
<pre><code>network.automatic-ntlm-auth.trusted-uris
network.negotiate-auth.delegation-uris
network.negotiate-auth.trusted-uris
</code></pre>
<p>and add the URL of your intranet domain, or proxy redirection page, like<br>
<code>https://intranet,https://intranet.neurotechnics.local,https://myproxy.local</code></p>
<ol start="3">
<li>Modify <code>signon.autologin.proxy</code> to be <code>true</code></li>
</ol>
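<p>Equivalently, the same preferences can be set in a <code>user.js</code> file in your Firefox profile directory (a sketch; the hostnames are placeholders for your own intranet and proxy URLs):</p>
<pre><code>user_pref("network.automatic-ntlm-auth.trusted-uris", "https://intranet,https://myproxy.local");
user_pref("network.negotiate-auth.delegation-uris", "https://intranet,https://myproxy.local");
user_pref("network.negotiate-auth.trusted-uris", "https://intranet,https://myproxy.local");
user_pref("signon.autologin.proxy", true);
</code></pre>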
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Windows Fall Creators Update (1709) and the IIS Error 503]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>We finally began the process of updating our Windows 10 desktops to version 1709 (Fall Creators Update) today, and what happened... all of our in-development websites stopped working.<br>
Pretty much every application pool crashed as soon as we accessed each site for the first time, showing a <strong><em>503: Service Unavailable</em></strong></p>]]></description><link>https://neurotechnics.com/blog/windows-fall-creators-update-1709-iis-error-503/</link><guid isPermaLink="false">6053043b7a23b74fe0eba5a9</guid><category><![CDATA[iis]]></category><category><![CDATA[windows 10]]></category><category><![CDATA[fall creators update]]></category><category><![CDATA[503]]></category><category><![CDATA[service unavailable]]></category><category><![CDATA[was]]></category><dc:creator><![CDATA[James]]></dc:creator><pubDate>Tue, 31 Oct 2017 01:00:36 GMT</pubDate><media:content url="https://neurotechnics.com/blog/content/images/2017/10/chris-lawton-154388--Medium-.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://neurotechnics.com/blog/content/images/2017/10/chris-lawton-154388--Medium-.jpg" alt="Windows Fall Creators Update (1709) and the IIS Error 503"><p>We finally began the process of updating our Windows 10 desktops to version 1709 (Fall Creators Update) today, and what happened... all of our in-development websites stopped working.<br>
Pretty much every application pool crashed as soon as we accessed each site for the first time, showing a <strong><em>503: Service Unavailable</em></strong> error.</p>
<p>In the windows event logs there was the following error event:</p>
<pre><code>Source: IIS-W3SVC-WP
Event ID: 2307

The worker process for application pool &apos;DefaultAppPool&apos;
encountered an error &apos;Cannot read configuration file&apos;
trying to read configuration data from file &apos;\\?\&lt;EMPTY&gt;&apos;, line number &apos;0&apos;.
The data field contains the error code.

ApplicationPool: DefaultAppPool 
ConfigException: Cannot read configuration file  
FileName: \\?\&lt;EMPTY&gt; 
LineNumber: 0 
            02000000 
</code></pre>
<p>Yup, nothing to do but Google it.<br>
So, many, many people are having the same problem.<br>
Turns out it&apos;s a known issue. (Who&apos;d have thought?)<br>
Apparently, many users are getting a <strong><em>WAS event 5189</em></strong> error instead.</p>
<p>To cut a long story short, the official resolution is to execute the following command-line operations in a PowerShell window (with admin rights - &quot;<em>Run as administrator</em>&quot;):</p>
<pre><code class="language-powershell">Stop-Service -Force WAS
Remove-Item -Recurse -Force C:\inetpub\temp\appPools\*
Start-Service W3SVC
</code></pre>
<p>For the official explanation, including cause and resolution details, see this official Microsoft Support KB article:<br>
<a href="https://support.microsoft.com/en-us/help/4050891/error-http-503-and-was-event-5189-from-web-applications-on-windows-10">https://support.microsoft.com/en-us/help/4050891/error-http-503-and-was-event-5189-from-web-applications-on-windows-10</a></p>
<hr>
<h3 id="update">Update:</h3>
<p>Thanks to <em>Bokkeman</em> for pointing out in the comments that the PowerShell script has obviously had a few problems, and Microsoft now recommends simply purging the entire folder using a regular command prompt.</p>
<p>&quot;To resolve this problem, manually delete the symbolic links that are created by Windows Update. To do this, follow these steps.&quot;</p>
<p><em>Note: Symbolic links can be deleted the same way as regular files.</em></p>
<ol>
<li>Open a Command Prompt window by using the <code>Run as administrator</code> option.</li>
<li>Run the following commands:</li>
</ol>
<pre><code class="language-batch">net stop WAS /y
rmdir /s /q C:\inetpub\temp\appPools
net start W3SVC
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>