NoNinjas Ramblings

Following is a list of Utah-based mirrors for Linux operating systems.

I prefer to use local Linux mirrors whenever possible. Local mirrors reduce the load and bandwidth demands on distributions' primary servers, offer better latency and download speeds, and make it more fun to be part of the local Linux community.

I will add to this list as I find more mirrors. For now, the three biggest Linux mirrors in Utah are all located in Salt Lake City:

  1. XMission [Homepage] [Mirror Home], an independent Utah ISP that also offers domain registration, hosting, and colocation services ... and from my experience, just all-around good people
  2. University of Utah Center for High Performance Computing [Homepage] [Mirror Home], a team that provides high-performance computing resources to support research and data needs at the University of Utah
  3. University of Utah Kahlert School of Computing [Homepage] [Mirror Home], the main college of computer science at the University of Utah

I've included comments for the mirrors I use regularly. For the distributions I'm familiar with, I've included brief instructions for switching to a local mirror. I haven't used all of these mirrors and distributions, of course, so I don't have comments for all of them, but a lack of comments does not mean a mirror is poor or that installation is difficult. For more information, XMission maintains excellent instructions for each distro they mirror.

Most recent update: 12 November 2024

Alma Linux – University of Utah Center for High Performance Computing: https://mirror.chpc.utah.edu/pub/almalinux/

Arch Linux [documentation] – University of Utah Center for High Performance Computing: https://mirror.chpc.utah.edu/pub/archlinux/ – XMission: https://mirrors.xmission.com/archlinux/ *I have had trouble with the Arch Linux mirror at XMission being slow to update, and slow doesn't cut it with Arch. I use the University of Utah mirror as the primary for my Arch installations.

To update your mirror list in Arch, edit /etc/pacman.d/mirrorlist. To add one or both of the Utah mirrors to your Arch mirror list, add these lines to the file:

Server = https://mirror.chpc.utah.edu/pub/archlinux/$repo/os/$arch
Server = https://mirrors.xmission.com/archlinux/$repo/os/$arch

The mirrors in the file are ordered by priority starting at the top.
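
After saving the file, you can force pacman to refresh its package databases from the new mirrors and bring the system current:

sudo pacman -Syyu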

CentOS – University of Utah Center for High Performance Computing: https://mirror.chpc.utah.edu/pub/centos/ – XMission: https://mirrors.xmission.com/centos/

CentOS Stream – XMission: https://mirrors.xmission.com/centos-stream/

Debian GNU/Linux [documentation] – XMission: https://mirrors.xmission.com/debian/ *I personally use the XMission Debian mirror on a regular basis and it is excellent.

To update your mirror list in Debian, edit /etc/apt/sources.list. To add the XMission mirror to your Debian mirror list, add these lines to the file for both $release and $release-updates:

deb https://mirrors.xmission.com/debian/
deb-src https://mirrors.xmission.com/debian/

Make sure to include the subsequent $release main contrib ... components on each line to match your original entries. Mirrors are prioritized in order in the sources.list file, so simply place the XMission mirrors above the Debian mirrors if you wish to keep both.
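
For example, on Debian 12 (bookworm), the complete entries might look like this (a sketch; adjust the release name and component list to match your existing lines):

deb https://mirrors.xmission.com/debian/ bookworm main contrib non-free-firmware
deb-src https://mirrors.xmission.com/debian/ bookworm main contrib non-free-firmware
deb https://mirrors.xmission.com/debian/ bookworm-updates main contrib non-free-firmware
deb-src https://mirrors.xmission.com/debian/ bookworm-updates main contrib non-free-firmware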

As far as I can tell, XMission does not host a “debian-security” mirror, so you cannot use XMission for debian-security entries. I've had good luck with both the UC Berkeley mirror (which is in the western US) and the MIT mirror:

https://mirrors.ocf.berkeley.edu/debian-security/
https://debian.csail.mit.edu/debian-security/
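
A matching bookworm security entry pointing at the Berkeley mirror might look like this (again a sketch; mirror the components from your existing debian-security lines):

deb https://mirrors.ocf.berkeley.edu/debian-security/ bookworm-security main contrib non-free-firmware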

Fedora Linux – University of Utah Center for High Performance Computing: https://mirror.chpc.utah.edu/pub/fedora/ – XMission: https://mirrors.xmission.com/fedora/

Gentoo Linux [documentation] – University of Utah Flux Research Group: http://gentoo-mirror.flux.utah.edu/ – XMission: https://mirrors.xmission.com/gentoo/

Knoppix [documentation (Debian)] – XMission: https://mirrors.xmission.com/knoppix/

Linux Mint [documentation (Ubuntu)] – XMission: https://mirrors.xmission.com/linuxmint/

MX Linux [documentation (Debian)] – MX and MEPIS Research Community: http://mxrepo.com/mx/repo/

openSUSE – openSUSE official Provo, Utah mirror: https://provo-mirror.opensuse.org/ *The openSUSE Provo mirror has been down since October 28th, 2024 because it is “being moved to a new location.” The mirror could be moving outside of Utah.

Raspberry Pi OS (formerly Raspbian) [documentation (Debian)] – XMission: https://mirrors.xmission.com/raspbian/ *I personally use the Raspberry Pi OS mirror at XMission and it is solid.

Rocky Linux – University of Utah Center for High Performance Computing: https://mirror.chpc.utah.edu/pub/rocky/

Salix OS [documentation] – XMission: https://mirrors.xmission.com/salix/

Slackware [documentation] – University of Utah Kahlert School of Computing: http://slackware.cs.utah.edu/pub/slackware/ – XMission: https://mirrors.xmission.com/slackware/

Ubuntu / Ubuntu Server [documentation] – University of Utah Kahlert School of Computing: http://ubuntu.cs.utah.edu/ubuntu/ – XMission: https://mirrors.xmission.com/ubuntu/ *I personally use the Ubuntu mirrors at the University of Utah and XMission on a regular basis and both are excellent.

To update your mirror list in Ubuntu, edit /etc/apt/sources.list (older releases) or /etc/apt/sources.list.d/ubuntu.sources (newer releases that use the deb822 format), depending on your version.
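
For example, on Ubuntu 24.04 (noble), a deb822 entry for the XMission mirror might look like this (a sketch; adjust the suites and components to match your system):

Types: deb
URIs: https://mirrors.xmission.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg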


For additions, removals, comments, or if I've screwed something up, please let me know.

posted by Mike A.

Your upload performance may have a bottleneck at the reverse proxy due to its TCP congestion control algorithm. It's easy to change.

The upload speed at my house was always crap, but my cable ISP recently gave me a big bump in upload speed. I was excited, but my upload performance stayed the same, which was a bummer. For a while I blamed my ISP for underhanded shenanigans, but I started to suspect the problem was on my end. I tried everything I could think of: NGINX proxy server configuration, router settings, tweaks to lots of things, but the problem turned out to be my Linux proxy server's TCP congestion control algorithm.

I hadn't heard of this before. Older congestion control algorithms like CUBIC rely on packet loss to signal congestion, so as soon as loss is detected they throttle the transfer rate. For me, switching to the BBR (Bottleneck Bandwidth and Round-trip propagation time) algorithm was the solution. BBR doesn't treat packet loss as a sign of congestion. Instead, it builds a model of the available bandwidth and round-trip time and uses that to optimize the transfer rate. This can be especially helpful on high-latency or slightly lossy links, where loss-based algorithms back off far more than they need to.

In certain cases, and definitely in mine, switching to BBR led to a massive improvement. I'm seeing about 10x better upload performance, so now I'm actually getting my ISP's advertised upload speeds, which I thought was impossible. In fact, even before the upload speed bump, I was probably significantly bottlenecked and had no idea. Ugh.

Switching your TCP algorithm to BBR

If you want to try BBR on your Linux reverse proxy, it's easy and reversible.

1. Check which TCP congestion control algorithm is active now:

sudo sysctl net.ipv4.tcp_congestion_control

2. Enable BBR:

sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

3. Check again to confirm BBR is enabled:

sudo sysctl -n net.ipv4.tcp_congestion_control
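
If BBR isn't accepted in step 2, the kernel module may not be loaded (it's built into most modern kernels). You can list the algorithms your kernel offers and load the module manually:

sysctl net.ipv4.tcp_available_congestion_control
sudo modprobe tcp_bbr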

4. If you like the performance, you can make the setting persistent by editing sysctl.conf:

sudo nano /etc/sysctl.conf (on Debian, Ubuntu)
sudo nano /etc/sysctl.d/99-sysctl.conf (on Arch)

Add these two lines to the end of the file and save:

net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr

BBR will now be enabled upon boot.
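
To apply the file immediately without rebooting, reload the sysctl settings:

sudo sysctl --system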

posted by Mike A.

Motherboard or SATA/RAID/NVMe drivers may not be the issue. The problem might be the way you're creating your Windows install media.

For five hours I banged my head against a wall trying to install Windows 11. I'm a Linux user, but I needed Windows for a particular program so I set out to install it on a newly built computer.

Everyone recommends using the Windows 11 Media Creation Tool to create the USB installation media, but that tool runs only on Windows. I didn't have a Windows computer so instead I used a Linux ISO writer, one that works great for creating Linux install media.

The computer booted to the USB, but very early in the install I was greeted by this message:

A media driver your computer needs is missing. This could be a DVD, USB or Hard disk driver. If you have a CD, DVD, or USB flash drive with the driver on it, please insert it now.

This message is pretty specific. The Windows installer needs drivers. So I went to my motherboard manufacturer's website, in this case Gigabyte, read the manual, and created a second USB drive with the specified Windows 11 AMD RAID Preinstall Driver. The Windows installer could read the USB drive, but it choked on the drivers: didn't care, wasn't satisfied with what I was offering. I tried AMD brand chipset drivers, other drivers, nothing worked. I screwed around with the motherboard's BIOS settings for a very long time, trying different things. Still wouldn't install.

Turns out, you can't create Windows 11 install media with just any Linux ISO writer, and I didn't know that. I had already tried several and none worked. Why does Windows install media need a special writer? What technical wizardry could a certain ISO writer be performing that others can't? Hell if I know for sure, but my best guess after the fact: Linux ISOs are hybrid images designed to be copied raw onto a USB stick, Windows ISOs aren't, and the main install image inside is too big for a FAT32 partition, so a proper writer has to repartition the drive and copy the files rather than simply clone the image. But I guess it doesn't matter anyway.

You need to use WoeUSB to create Windows install media on Linux. The ISO direct from Microsoft will do fine—you don't need to use their media creation tool—but this particular writer performs whatever black magic Windows requires for installation. WoeUSB is a command line application and is available either in your Linux distribution's repository or can easily be added.

WoeUSB Installation

Ubuntu/Debian

sudo add-apt-repository ppa:tomtomtom/woeusb
sudo apt-get update
sudo apt-get install woeusb

Fedora

sudo dnf install woeusb

Arch (from the AUR)

yay -S woeusb

Using WoeUSB

It's important to first identify your USB drive with lsblk.

lsblk

Identify your USB drive in the output. It'll look something like /dev/sdb. It's very important to get this right, because WoeUSB will happily overwrite whatever device you point it at ... like your system drive.
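
For example, the output might look something like this (hypothetical; your device names and sizes will differ):

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0   512M  0 part /boot/efi
└─sda2   8:2    0 465.3G  0 part /
sdb      8:16   1  28.9G  0 disk

Here the 29 GB removable drive (RM column is 1) at /dev/sdb is the USB stick.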

To actually write to your USB drive and create the Windows install media, here's the command:

sudo woeusb --target-filesystem NTFS --device /path/to/Windows11.iso /dev/sdX

Replace /path/to/Windows11.iso with the actual path to your Windows 11 ISO file, and replace /dev/sdX with your USB drive identifier.
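
For instance, with a hypothetical ISO in your Downloads folder and the USB stick at /dev/sdb from the lsblk step above:

sudo woeusb --target-filesystem NTFS --device ~/Downloads/Win11.iso /dev/sdb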

After that the program will run, the Windows install media will be created, and your Windows install should now work.

posted by Mike A.

Distro hopping is one of the joys and vices of Linuxhood. I've run most of the upstream (or relatively upstream) ones: Debian, Ubuntu, openSUSE, Fedora, Arch, Gentoo. I've also run a ton of the downstream distros like Linux Mint, Pop!_OS, Zorin, MX Linux, Devuan, EndeavourOS, Manjaro, Void, KDE Neon, Garuda, antiX, Damn Small Linux, Linux Lite, Kali, and the different DE flavors of Ubuntu. I've even run FreeBSD, which isn't Linux but is UNIX-like. They all have strengths and weaknesses, and each of them will appeal to different users for different reasons.

Debian is always my go-to. It's great for servers, for desktop use, and everything in between. But Arch, which has a completely different philosophy, is damn good too. I have Arch installed on two of my laptops. And I really like Fedora. The others are more specialized but have big upsides in certain situations.

Here's the thing though, the dirty little distro secret: Distros are basically all the same. In fact, the majority of them really don't need to exist. Once you've got a distro installed, even one with a heavily customizable installation like Arch or Gentoo, the subsequent experience isn't too different from any other distribution. Sure, each upstream family has its own package manager, but while the syntax differs among them, the package managers all operate essentially the same way (Slackware and Gentoo are moderate exceptions here).

But come on. Those aren't big differences. For desktop use, each distro selects its supported desktop environments from the same pool as everybody else. Once you're in KDE Plasma or GNOME or Xfce or Cinnamon or whatever, that desktop environment is virtually indistinguishable from the same one on another distro. Most of them even offer graphical update programs that will update (and usually install) packages from the package manager without touching the terminal at all. I would argue that, for desktop users, Linux is far more about the desktop environment than it is about the distribution.

That's why so many Linux distributions distinguish themselves by either creating their own desktop environment or offering a customized one. That's where users live.

Home users, that is. But the distro likenesses hold true even on the command line. Differences in command-line usage do exist, but they're rare, and they're limited largely to package names, package manager syntax, and sometimes the file structure.

The big point is that distro hopping is kind of circular. And hey, as Linux users we've all done it. But after installing a new distro, I continue to be amazed at how NOT different it is from what I've used before, and that's okay. In my opinion, if you want to explore, try the far upstream distributions for each major Linux family: Debian, Arch, and Fedora. Whichever one you like best, stick with it. Don't overthink it. I personally think that any of these three provides a great canvas to start from and can be molded into exactly what you want.

If you want to follow the path further—let's say you feel most at home in the Arch world—it's fine to try Manjaro or Garuda or whatever, but know that the advantages are usually minimal at best and these distributions may not be better than the one they're forked from.

Anyway, that's my opinion. Linux is a joy no matter which distribution you select or how you choose to use it. Have fun!

posted by Mike A.

  1. Raspberry Pi Zero W (Debian Linux, testing computer)
  2. Raspberry Pi Zero W (Debian Linux, network inventory server)
  3. Raspberry Pi Zero W (DEAD; fried itself)
  4. Raspberry Pi 4 (Debian Linux – Not doing much)
  5. Raspberry Pi 4 (RetroPie – Basically a Sega Genesis and an NES)
  6. Raspberry Pi 4 (Volumio Media Server – Feeds music to my HiFi)
  7. ZimaBoard (Debian Linux – Backup server in my barn)
  8. ZimaBoard (Debian Linux – Family loft computer)
  9. Lenovo ThinkCentre (Ubuntu Server – NGINX reverse proxy)
  10. Dell R720 Server (OPNsense BSD – Router, DNS, VPN, firewall)
  11. Dell R620 Server (Debian Linux – turned off, screams like a banshee)
  12. Dell R720 Server (Debian Linux – webserver, Nvidia GPU for AI)
  13. Dell R720 Server (Ubuntu Server – Nextcloud Server)
  14. Dell R720 Server (Ubuntu Server – Media Server)
  15. Dell MD1200 PowerVault (No OS, media storage array)
  16. HP Desktop (Debian Linux – all purpose desktop computer)
  17. Beelink SER7 (Pop!_OS – main desktop & moderate gaming rig)
  18. Lenovo Laptop (Arch Linux – general use)
  19. Apple MacBook Pro (Arch Linux – general use)
  20. Apple iMac (Debian Linux – General use)
  21. Apple Mac Mini (Debian Linux – Kids' media computer)
  22. Apple PowerMac G4 (Gentoo Linux – just for fun, sloooow)
  23. Apple Macintosh Color Classic (System 7 Mac OS – fun, useless)
  24. Packard-Bell Tower (Microsoft Windows 95 – DOS-era gaming rig)
  25. Tandy 1000HX (MS-DOS 2.11, prehistoric gaming and nostalgia)

posted by Mike A.

I started using MS-DOS 2.11 when I was eight. Later, I refused to move exclusively to Windows and lived mostly in DOS, but by the time Windows 2000 came along I was forced to leave DOS behind. In 2012, after 24 years with Microsoft, I was done, and I switched exclusively to Mac OS X (later macOS). Then, in 2019, I chose Windows 10 for my new company, so I used Windows at work and macOS at home.

By 2021 I was sick of all the bullshit, and though I'd tinkered with Linux for 15 years, I went all-in and switched to Linux for everything. My Windows work computer, my fancy iMacs and MacBooks, everything got formatted and replaced with Linux, and I didn't even bother dual-booting.

I doubt if I'll ever go back. Without question, I'm happier with Linux than any OS I've run in 34 years, MS-DOS included. I should have done it years ago. Hell, I'm in the process of gradually moving much of my company to Linux, even the desktops.

I distro-hopped for a while but soon realized that, aside from details, Linux is Linux. I settled on Debian, but I'd be just as happy with Arch or Fedora or openSUSE. I tried pretty much all of the desktop environments (DEs) and decided on KDE.

The reason I don't think Windows or macOS will ever lure me back is that Linux is infinitely customizable and changeable. If I don't like something or if I get bored, I can change it. I can make it my own, over and over again. My work computer and my home computer look and behave almost like different operating systems, but the same Linux terminal is always available in both.

In our modern SaaS-infested world, I can't quit Microsoft entirely, unfortunately. We still use Azure and Microsoft Active Directory at work; the value is just too good. (Our Azure VMs run Linux, though.) And we still have tenured holdouts at work who insist upon macOS or Windows, so I still use both sometimes, mainly to fix them.

It'll be Linux for me for the long haul. I'll take my place in the Hall of the Geeks and that will be that. But I'll never again be hounded by my operating system for my money or my attention; it'll never needle my brain for social media integration; and it'll never prod me to use one browser or another. If that were all (and it's not) it would be enough.

posted by Mike A.

If you’re hosting a WordPress site behind an NGINX reverse proxy, it’s not easy or straightforward to get the whole thing working. WordPress, it seems, is hardly designed to be stuck behind a reverse proxy and it gets fully pissed off if you try. It took me two days to get it working right, meaning scouring lots and lots of forums, and finally I resorted to trial-and-error, which is the worst way to accomplish anything. I’m writing this to hopefully save you some time.

NOTE: I am not an expert. I am an amateur. This information isn’t meant to be authoritative, just a guide to the basics of how I solved this problem for my own purposes.

SYMPTOMS: From the get-go, WordPress wasn’t using CSS or JavaScript at all, even for the installer. Everything looked completely text-based. The installer worked, but once WordPress was installed, and depending on the settings I tried, images were broken and WordPress URLs would redirect to the server’s internal IP. I would often get strange subdirectories inserted into the URLs, especially with the wp-admin backend, and sometimes I’d get partial-SSL warnings from my browser.

I’m using NGINX as a reverse proxy on one server, routing traffic to a WordPress site served by NGINX on another server. This guide will probably work if you’re using Apache to run WordPress, but you’ve got to be using NGINX as the reverse proxy.

SO THE ROOT OF ALL THIS is SSL not playing nice with the reverse proxy. WordPress really, really wants to operate through SSL, and although my reverse proxy sends and receives all external traffic via SSL, LAN traffic isn’t encrypted so the webserver sees the connection as insecure. All we really need to do is tell WordPress that the traffic is fine.

Reverse proxy setup

First, we need to forward some header information from the reverse proxy to the web server. At its most basic, assuming you have the standard NGINX “default” file in place, a reverse proxy server block looks like this:

server {
    server_name domain1.com;
    location / {
        proxy_pass http://192.168.1.111/;
    }
}

There’s lots more configuration you can put in a server block, but this is all you really need for a functioning reverse proxy that routes external requests for domain1.com to a server on the local network at 192.168.1.111.

Now, here’s the bare minimum a server block needs in order to pass along the SSL headers WordPress requires, along with SSL configuration that isn’t within the scope of this article:

server {
    listen 443 ssl;
    server_name domain1.com;
    location / {
        proxy_pass http://192.168.1.111/;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    ssl_certificate /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;
    ssl_dhparam /path/to/ssl-dhparams.pem;
}
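
Before restarting, it’s worth validating the configuration; nginx has a built-in syntax check that will catch things like a missing semicolon:

sudo nginx -t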

After you've made the necessary changes, restart NGINX:

sudo service nginx restart

wp-config.php setup

Now we need to configure WordPress. We’ve got to edit the wp-config.php file, which lives in the root of the WordPress directory but isn’t created until WordPress is installed. So install WordPress if you haven’t already (visit your web server in a browser and follow the instructions); you’ll probably have to live without CSS and JavaScript, but it still works. Once that’s done, fire up your favorite text editor and edit wp-config.php:

sudo nano wp-config.php

You’ll need to add a few lines to the top of the file:

<?php

// Tell WordPress the original request was HTTPS when the proxy says so
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}

// Use the public hostname forwarded by the proxy instead of the internal one
if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_X_FORWARDED_HOST'];
}

     /**
     * The base configuration for WordPress ...

Once you’ve added the lines, save the file and restart NGINX on the web server:

sudo service nginx restart

WordPress address setup

Now take a look at your site; it should be looking much better. But if you’re like me, you’ll notice that once you start clicking around, the URLs begin redirecting to the internal server IP, which won’t work. This is because WordPress tries to be helpful during installation: rather than just serving pages normally and letting the reverse proxy do its thing, it wants to control the domain and sets it to what it guesses it should be. In my case, the domain was set to the internal server IP, not the external domain name, so every page load would force the site back to the IP.

There are two ways to fix this. The first, easiest way is to go to your WordPress site and navigate to Dashboard => Settings => General. There you will find WordPress Address (URL) and Site Address (URL). They’re probably set to your server’s internal IP address. Change the values to your domain name, with the http:// included:

http://domain1.com

But if you do this wrong, or if you couldn’t access the dashboard in the first place, you might be stuck. Thankfully, you can set these values by adding lines to the wp-config.php file instead. If you do edit the file rather than go through the dashboard, the dashboard fields will be grayed out, since you’ve hardcoded the values. To do this, head back to your text editor and add these lines to wp-config.php:

define( 'WP_SITEURL', 'http://domain1.com');
define( 'WP_HOME', 'http://domain1.com');

Remember to enter your domain name in place of domain1.com. Now, there’s another option here, which is to use the reverse proxy server to set the domain name and tell WordPress to use the root. That way you can change your domain name or point another one at WordPress and it’ll be seamless. If you want to do this, insert this code in place of the previous lines:

define( 'WP_SITEURL', '/');
define( 'WP_HOME', '/');

Once you’ve added your chosen lines to wp-config.php, restart NGINX:

sudo service nginx restart

Fixing strange redirects

This last thing may not be happening to you. If your WordPress site is operating as you expect it to, don’t bother with this. But in my /wp-admin/ section (the Dashboard), WordPress kept trying to insert an extra subdirectory into the URL, which would break pages, like this:

https://domain1.com/domain1/wp-admin/blah_blah…

The extra subdirectory was the same as the WordPress folder on my server. I could remove the extra subdirectory from the URL and the page would load, but doing that on every page load in the Dashboard was really annoying. I am 100% sure I have misconfigured something simple somewhere, but after lots of searching I couldn’t find it. It’s fixable as-is, though, with another line of code added to the wp-config.php file.

$_SERVER['REQUEST_URI'] = str_replace("/domain1/wp-admin/", "/wp-admin/", $_SERVER['REQUEST_URI']);

This is just a simple rewrite that you can edit for your own purposes. The str_replace function takes one string (here, a section of the URL) and replaces it with another. In this case, I’m telling WordPress to strip the extra subdirectory whenever it appears in front of “wp-admin” and leave “/wp-admin/” by itself. You can play with this to add or take away pieces as you need. One day I’ll find the real problem and fix it, but for now this works fine.

So that’s it. Hopefully this was helpful. If I screwed something up, or if this article was helpful, let me know in the comments.

posted by Mike A.

Snappymail + NGINX proxy

This article explains how I configured Snappymail to run behind my NGINX reverse proxy. I’m discussing direct server installations, meaning running Snappymail independently, not as part of an integration with Nextcloud or Cloudron or some kind of control panel, although this article might help with some of those situations. Also, this isn’t a comprehensive article, so it won’t delve into proper NGINX server block arrangements or security considerations, etc. It’s just for getting it running on the Snappymail side.

Previously I ran Rainloop, a webmail client from which Snappymail is forked, and it worked fine on my server behind an NGINX reverse proxy. But it has become apparent that Rainloop is now a dead project and frightening security holes have remained unpatched.

I therefore migrated from Rainloop to Snappymail, which is actively maintained; the migration itself was a straightforward process. But after installation, the server greeted me with this error message:

An error occurred. Please refresh the page and try again. Error: Failed loading /mail/snappymail/v/x.x.x/static/js/min/libs.min.js

I’m no expert, so this message didn’t make the problem apparent to me. I thought maybe my web server wasn’t processing JavaScript correctly or maybe I was missing a dependency, or something else entirely. So to help prevent you from banging your head against a wall as hard as I did, here’s the solution, and it’s simple:

Because a reverse proxy sends traffic directly to the virtual server root, which isn’t what Snappymail is expecting, Snappymail must be told where to look for its files.

As we can see from the error message, Snappymail is looking for a file here:

/mail/snappymail/ ...

What we need is for it to be configured to start a level higher, so that it looks for the file (and all other files) in this manner:

/snappymail/ ...

There are a number of ways to solve this, including configuring a redirect at the reverse proxy or at the web server, but from reading this GitHub discussion thread, we see that the app developers included a setting that easily solves the problem without messing with server settings.

Method 1: Edit application.ini (if you don’t have app admin access)

Configuration can be done via the Linux command line by editing the configuration file directly. If your /data/ directory is properly configured, you’ll notice that the file is protected and can only be accessed or edited by root (your actual file path may vary depending on how you configured your Snappymail domains and web server).

sudo su

nano /var/www/html/mail/data/_data_/_default_/configs/application.ini

If you need help finding application.ini on your system, you can always search for it:

sudo find / -name application.ini 2>/dev/null

Once you’ve found the file and opened it in a text editor, locate the following line:

app_path = ""

Then edit the line to read:

app_path = "/"

This instructs Snappymail to look for its files in the app root, which plays nicely with the reverse proxy. After that’s done, save the file and exit the text editor, then exit root and restart your webserver:

exit

sudo systemctl restart apache2 (Apache on Debian/Ubuntu)
sudo systemctl restart httpd (Apache on RHEL/Fedora)
sudo systemctl restart nginx

Snappymail should now be working.

Method 2: Snappymail admin GUI (if you have admin access)

If you can reach your Snappymail server at its internal network IP, which typically means you’re inside the same network, you might be able to bypass the reverse proxy by navigating to that IP in a browser:

http://192.168.1.11/snappymail/$admin

Otherwise, try navigating to your public Snappymail admin login:

https://www.yourdomain.com/snappymail/$admin
https://snappymail.yourdomain.com/$admin

Login with your admin credentials, then navigate to:

Config => webmail => app_path

The field will probably be blank. Simply enter a single slash: /

Then scroll all the way to the bottom of the Config settings and click “Save”. That's it. Your installation should now be working.

posted by Mike A.

As of this writing, notifications and popups in KOrganizer and Kalendar in KDE Plasma on Kubuntu and Ubuntu are broken and do not work with a standard installation. This brief article explains how to get notifications working.

At my workplace, I schedule a lot of one-on-one meetings with co-workers, and the start times for these meetings vary week-to-week for each meeting. I therefore need a robust calendar, and just as important, a robust reminder system that I can rely on so I don’t miss any meetings. Two calendar programs for KDE Plasma, KOrganizer and Kalendar, both of which can easily be installed with APT from the Ubuntu repository, are great options. They allow local calendars, or you can connect them to a CalDAV server.

Trouble is, when I would schedule an appointment with a reminder, no reminder would show up. I would get … nothing. I double checked the settings within these two programs and double checked the KDE System Settings, but no matter what, I couldn’t get it working. Notifications appeared to be completely broken, full stop. Obviously this would not work for me.

After doing a bunch of searching, I ran into this thread on the KDE forums that gave the right solution.

The module responsible for delivering KDE PIM calendar notifications is apparently “Kalendarac” and in order to run, the “libkf5akonadicalendar-bin” needs to be installed.

thebluequasar, KDE Forums

The trouble is a missing package that does not install as a dependency for KOrganizer or Kalendar. That package is libkf5akonadicalendar-bin, and it can easily be installed on Ubuntu or Kubuntu with the following command:

sudo apt install libkf5akonadicalendar-bin
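
If you want to verify that the package actually ships the kalendarac notification daemon mentioned in the forum post, you can list its contents (a quick sanity check; exact file names may vary by version):

dpkg -L libkf5akonadicalendar-bin | grep -i kalendar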

There’s one caveat: if you plan to use only KOrganizer (which I find clunkier and not as good-looking as Kalendar, but far more functional and full-featured), the notifications still won’t show up unless you also install Kalendar on your system in addition to libkf5akonadicalendar-bin. I don’t use Kalendar, but I keep it installed, sort of as a dependency.

This should get your notifications rolling.

UPDATE February 2024:

This seems to be fixed in newer releases, and notifications just work now, at least on Debian and Ubuntu. If you reinstall your OS for some reason, you may not need to worry about this again.

posted by Mike A.

In my garage shop, any dropped fastener that is too small to be arguably classified as a “bolt” has as little chance of being found, no matter the effort, as it would have if I had fired it out of a wrist rocket from a high-altitude Huey, in the dead of night, into the tropical rainforest somewhere between Rio de Janeiro and Santiago, and started my search “somewhere in the middle.” In fact, my research into the matter leads me to believe, with reasonable confidence, that dropped fasteners alert a galactic agency that, in the blink of an eye (and before my brain even processes that I’ve dropped the part and realizes that I am, to put it delicately, “fucked”), senses the drop and beams the fastener into wide orbit around Pluto, even before it hits the floor. Sometimes they add the sound of a skittering screw for the amusement of staff, or omit the sound of a skittering screw, also for the amusement of staff.

Fun fact: The New Horizons probe, during its approach to Pluto in 2014, was narrowly grazed by a speeding sear screw out of a 1941 Smith & Wesson, beamed there in 1982 after a homeowner near Des Moines accidentally flung the spring-loaded thing across his garage that November (hilariously, he searched for it for three hours, becoming progressively more irate and inebriated, until he gave up and airmailed the Wesson frame through his Pontiac’s rolled-up passenger window and onto a lap belt on his bench seat). The New Horizons probe, narrowly unharmed, famously photographed Pluto, but would have become tumbling space garbage had it met the sear screw, which grazed it at over 60,000 miles per hour, at a different angle. The sear screw, meanwhile, was deflected and sent tumbling directly for Alpha Centauri, instantly becoming the fastest-moving man-made object ever, and, in a few centuries, it will become the first Earth-based object to arrive in another solar system (uncredited, of course).

posted by Mike A.