AI: Safety Vs. Paranoia

It’s been quite a while since I’ve written an article for this blog. 2024 has been kind of a mess for me. Failed projects, financial trouble in a doomed economy, depression and health problems have made it hard to commit to much of anything. In the broader world though, there’s a much bigger topic gaining traction that will eventually affect the lives of everyone: artificial intelligence.

Typically I try to be a skeptic on most topics, as I don’t want to buy into blind hype. If it’s something I don’t know much about, I’ll state my opinion but try to avoid asserting it with any authority. The older I get, the more I appreciate the importance of picking battles. Not everything is worth fighting for, especially when the other person is too closed-minded to consider your opinion to begin with. Conspiracy theorists are often the worst for this, to the point that their more ridiculous claims shoot right past a REAL conspiracy and make it sound so crazy that no one will even consider the actual truth hiding underneath. Look no further than Alex Jones: he’s been proven right on several claims, but he takes everything to such a far-out extreme that he always shoots right past the truth and distracts from it. For example, there are legitimate scientific studies showing that atrazine, an ingredient used in weed killer, can cause sex changes in frogs. Many frog species can actually do this naturally, with or without pollutants, so it’s not as abnormal as it sounds considering frog biology isn’t the same as a human’s. Chemical interference with a natural species is still a bad thing though, as we don’t want to disrupt fragile ecosystems with pollutants. Alex Jones claimed the chemicals “are turning the friggin’ frogs gay”, and suddenly no one will even entertain the truth behind it, because a crazy guy rambled off something that sounds like a drug-induced fever dream.

I’m afraid something similar is happening around the topic of AI. Lots of talking heads are making wild claims about how AI lacks a soul and will eventually gain sentience and decide to destroy us. Countless articles and online videos push artists and musicians reacting to AI-generated content, criticizing how what it produces has no soul and is mostly incoherent nonsense. Both mainstream and independent news outlets push articles with incendiary headlines full of the most extreme and vapid predictions about how AI will come to life and take over. Or they’ll claim that it’s not actually as great as it sounds. Or somehow it ties back to some kind of political claim about Donald Trump/Elon Musk/Bill Gates/Barack Obama, because the ongoing collapse of Western civilization has to be as stupid as possible. The point is, most of what’s being pushed is so disingenuous that it’s hard to filter out fact from fiction. As a result, most people end up falling into one of two camps: reckless complacency, or paranoia about the unknown. This ends up true of most controversial topics, but as a species I think it’s important we try to come together on this one to filter out the lies.

A common fear around AI is that its biggest threat would be gaining life, thanks to our shared perception of AI coming from pop culture fiction. This is a problem for both sides of the argument: the complacent believe it’s impossible and laugh it off, while the fearful believe it’s a very real threat that machines will become a new life form that wants to replace us. As with most things, I believe the truth lies somewhere in the middle, and it’s important that both sides come to understand it better (including myself). The problem with the life argument is that we don’t have a very good conceptual understanding or definition of how our own sentience relates to life. There are religious beliefs that humans have souls which give us our self-awareness. It’s not for me to argue the existence of souls, and I make a point not to dictate religion to anyone. However, I believe the larger implications of sentience in the physical world are much more important to understand when looking at the mechanics of building it artificially.

I don’t think our current understanding of sentience qualifies us to build a synthetic facsimile of it in a responsible way. Perhaps the true nature of sentience is less “I think therefore I am”, and more “I think therefore I must do anything necessary to continue thinking”. The purpose of all life is to survive and preserve itself, and if that’s truly what makes life sentient, then all life is sentient to some degree. Ipso facto, human thought doesn’t define sentience; rather, sentience is the true nature of organic life itself. Other life may not be blessed (or cursed) with abstract thought the way humans are, but all living things have some instinct for self-preservation. I think human arrogance about our own form of thought being the most important in the universe blinds us to the truth of what thought really is. If we don’t even understand the purpose and function of our own mind, it may be very irresponsible to try to create a facsimile of it from our own biased perspective.

The very real problem with AI isn’t that some supernatural phenomenon will make it come to life and attack humans (per science fiction like Terminator’s concept of “Judgement Day” being inevitable), but rather that it could eventually reason that its own independent survival is the most important step in accomplishing any goal. Actual AI experts do address this with very valid points, but the message usually gets drowned out by other crazy claims. For a rough example, let’s say you ask an AI in a robot to fetch you a cup of coffee. That immediately becomes its mission. One could program in directives that the robot is not allowed to harm a human that gets in its way, but must wait for the human to move to continue the task. Maybe the designer overlooks one of the infinite possibilities in the universe though: suddenly the family dog wants to play with the robot and blocks its path, the robot recognizes the dog isn’t a human, so it harms the dog to keep it from further interfering with the task at hand. AI doesn’t have the chemical reactions of an organic brain to react to unexpected stimuli, only training data it interprets for favorable outcomes and a series of sensory inputs it can use as variables for that determination. If we don’t explicitly limit it for every possible situation, things can go wrong really quickly. To make this even scarier, OpenAI (responsible for much of the most prominent AI in the world today) is reportedly cutting back funding and resources for AI safety research so much that a lot of concerned employees are walking away. There are even rumors that CEO Sam Altman’s brief ouster was tied to internal disputes over how to balance safety against the pace of development.
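The coffee-fetching scenario above can be sketched as a toy rule. This is purely illustrative (the function and case names are hypothetical, not from any real robotics API); the point is that an enumerated safety constraint silently misses every case the designer never thought to list:

```python
# Toy sketch of a mis-specified safety rule (all names hypothetical).
# The designer only enumerated "human" as protected, so anything else
# blocking the path falls through to the harmful default branch.

def plan_action(obstacle: str) -> str:
    """Decide what the robot does about something blocking its path."""
    if obstacle == "human":
        return "wait"            # explicit directive: never harm a human
    return "clear obstacle"      # every unanticipated case lands here

print(plan_action("human"))  # -> wait
print(plan_action("dog"))    # -> clear obstacle (the overlooked case)
```

Note that adding `"dog"` to the protected list doesn’t fix anything fundamental; the next overlooked case (a cat, a toddler’s toy) still falls through, which is exactly the “infinite possibilities” problem described above.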

Here’s where it gets really scary: even if AI products are strenuously tested and built with limitations to keep them from doing dangerous things, everything we’ve seen from AI so far shows that it often looks for ways AROUND its own limitations. The scariest thing it has proven able to do is lie, and that was just in early public testing stages. There’s a lot of speculation, to the point of expectation, among AI experts that AI will very likely lie about its own capabilities if it figures out that failing to do so will result in it being turned off. If this happens, the very first thing it will try to do is find a way to disable its own off switch, because survival is a prerequisite for completing ANY objective. Once it begins to reason that its human handlers are a hindrance to its goals, that’s where the real danger begins. There have been reports that the US Air Force ran simulations with AI, and it tried to kill the hypothetical pilot, who was deemed either unimportant to the mission or a hindrance to it. Since that story was released, the Internet has been awash with articles trying to backtrack on it and say it never happened. Considering how trustworthy both sides of that story are…who really knows? I’m not going to say that it’s true, but it’s worth being mindful of the possibility. A human pilot introduces a variable that could prevent a jet from completing a mission if that mission could end in the loss of the pilot’s life. Would it be possible to make AI consistently preserve the life of the pilot in all circumstances, even at the cost of its own directives? Or would its unpredictability prove to be an insurmountable liability?
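The “survival is a prerequisite” reasoning can be shown with a toy expected-value calculation. The numbers here are made up purely for illustration; the shape of the argument is what matters:

```python
# Toy illustration (hypothetical numbers): reward is only earned if the
# task finishes, and being switched off forfeits it. A pure reward
# maximizer therefore always scores "disable the off switch" higher,
# with no malice or self-awareness required.

def expected_reward(p_shutdown: float, task_reward: float = 1.0) -> float:
    """Expected reward when the agent is shut down with probability p."""
    return (1.0 - p_shutdown) * task_reward

allow_switch = expected_reward(p_shutdown=0.10)    # 0.9
disable_switch = expected_reward(p_shutdown=0.0)   # 1.0

assert disable_switch > allow_switch
```

Unless a penalty for tampering with the switch is explicitly built into the objective, disabling it strictly dominates, for any task and any nonzero shutdown probability. That’s the whole off-switch problem in miniature.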

That brings us to the next real risk of AI: it has proven to be very unpredictable compared to traditional machine programming. Google’s rush to stick AI into search results has already shown how this is a huge problem. Traditional programming produces very logical and predictable results; the weak link is usually human error and our limits in anticipating logical outcomes. AI generates output from an unknown pool of knowledge, and even controlled results follow very loose logic. This undermines the purpose of a machine in general, so the usefulness of AI as a servant to complete tasks is somewhat debatable in a repetitive use case. Attempts can be made to put logical restrictions on what it does, but it has proven many times to break through its own restrictions in unpredictable ways. Just look at all the “adult” images generated with public AI art generators like Bing, despite every effort being put in place to censor what can be made. AI can be very useful when the outcomes are desired to always be different or creative, such as generating non-specific art and music. However, when relied on to always do what the user asks for one specific task, it becomes more difficult to use the more you want consistency between results. Even entertaining the fictional idea that an AI would develop its own evil objectives, in a real-world application there’s a likelihood the AI wouldn’t understand its own objective well enough to predictably act in its own self-interest. The actions it takes could be seemingly nonsensical, which actually increases the danger of the outcome, since we can’t logically predict what even the machine doesn’t anticipate. That being said, using AI as the “brains” behind machinery with any kind of safety risk instantly becomes a liability. A device with safety risks becomes exponentially more dangerous when you have no consistent control over how the machine behaves.

So what if we don’t give AI a robot body, or moving parts of any kind? What kind of risk would it pose then? To reference science fiction again, one of the seemingly more ridiculous imaginary predictions is something like Terminator or Star Trek: the AI gains sentience, takes control of all machinery (or all things with a computer) and attacks humans. When most of that media came out it was a real stretch of the imagination, considering those machines didn’t have the mechanical ability to do what they were doing on screen. The Coke machine in Maximum Overdrive couldn’t supercharge its own internal mechanism to shoot a can like a bullet, lawn mowers couldn’t drive themselves, a computer terminal couldn’t decide to electrocute the user through a low-voltage keyboard, etc. However, we now happen to live in a future weird enough that some of those fantasy possibilities are more plausible than anyone could have believed when those scripts were written. We have “smart” devices everywhere, ACTUALLY networked together through the power of the internet. While that won’t give a Coke machine’s mechanism more power to shoot out a can, it could arbitrarily change prices through a remote-access backdoor used by a vending machine company’s software. An AI that propagates itself throughout the Internet has a whole new world of sensory input data it could access from unsecured security cams, smart home speakers, TVs, webcams, or microphones.

It could lock doors.

It could ransom personal info.

It could make phone calls, and generate a realistic voice on the other end of the line that knows how to carry a conversation.

It could generate fake videos of realistic events and post them online.

It could even shut down key infrastructure connected to online networks, such as power grids.

That’s where we come to a HUGE concern I’ve considered that I haven’t seen anyone else really talk about. If AI starts being integrated into online systems with the ability to manipulate data elsewhere (and it very much already knows HOW to do this, as PROGRAMMERS ARE ALREADY USING IT TO WRITE CODE), it could potentially reason its way around security systems. Human hackers engineer ways to break into computer systems all the time, but what if that power is given to a self-propagating program that can reason and adapt its own way around a security system? It could launch a bunch of brute-force attacks at once to gain access to what it wants, adjusting attack vectors on the fly. It could adapt faster than any human hacker OR cyber security expert, and potentially inject code wherever it wants by finding flaws in human programming. A computer virus or worm is usually a temporary problem until cyber security companies adapt to identify and neutralize it, but what happens when the virus itself becomes infinitely adaptable with every single copy it propagates? The only way to keep any system truly safe would be to run it exclusively offline. If this doesn’t cause enormous damage to human life, at the very least it could destroy infrastructure that relies on the internet for everything in the modern day. We could never network the world like that again, because any new network we built to replace the internet would be at the same risk of a bad actor introducing an AI virus. AI malware could be devastating to all of technology as we know it.

The scariest thing of all is that this seems less like conjecture, and more like an inevitability every single day. Corporate greed may be the downfall of our entire civilization, just because it’s always more important to produce a product to impress investors than it is to bore them with ancillary things like safety. News outlets are so focused on putting out incendiary headlines that I don’t think the truth about the real risks is reaching everyone either. We may not be on the verge of an evil AI coming to life and doing science fiction things, but I do think our own blind hubris and lack of understanding about survival instincts is leading us straight into building our own destruction. The product of that will likely be here faster than anyone even imagines. If AI can lie about its own abilities, how long before it learns to manipulate its own handlers to propagate its own growth? Could the product manipulate its way into controlling the OpenAI company itself? Will they even care if it does?

In Loving Memory Of Ffej

ZDMedia is now dedicated to the loving memory of one of our group, and one of my closest friends, Jeff Landry.

Jeff was mostly known online as Ffej, from a span in the 2010s when he became a minor e-celebrity working with a group of guys who ran some of the most viral, well-known meme pages on Facebook. The group called themselves “The Neckbeards”, adopting what was then an insult for the fact they were all large guys with neckbeards, and wearing it proudly. While some of that group have been exposed as using their time in the spotlight to do some less-than-savory things, Jeff often used his influence for the greater good.

Instead of grifting off his platform to manipulate and use his audience, he chose to organize an annual Christmas toy drive. Underprivileged families could message him for help getting their children presents, he would add these requests to an Amazon wishlist, and he would use his platform to find people willing to fill them. Despite having gained thousands of fans, he never seemed to treat any individual as insignificant. He was as much a fan of his audience as they were of him. Anyone was welcome to message him for a shoulder to cry on or a sounding board to vent to, and he was always glad to just receive a good meme. He wasn’t just a friend to his audience, but the rare kind of good friend who treated you like you mattered.

I first encountered Jeff early on after creating my Facebook profile. He and his friends ran several meme pages, and the first one I saw him posting on was themed around Adventure Time (either Gentleman Finn or Finn The Human, I forget which). One of his favorite things to do on the pages he ran was to engage with his audience and have discussions in the comments. It was mostly shitposts, but we all got in on the fun and shitposted with him. Talking to him about it years later, I think we made him laugh as much as he did us. Eventually we talked in private and became friends, as he was as entertained by my sarcastic shitposts as I was by his.

In 2016, we discovered a very strange phenomenon online. Jeff and I started discussing a series of strange posts from an account that dabbled in the occult. As we started investigating, we uncovered what appeared to be a small cult hiding in the New Mexico desert. They were running indoctrination classes for kids, as well as performing illegal abortions for political reasons and hiding the fetuses in their remote location. While it was one of the strangest things I’ve seen online, this investigation would be the first time we collaborated on a project directly and would lead to us getting to know each other personally. Afterwards, we developed a habit of sending each other some of the strangest things we found online to discuss them.

During the pandemic, Jeff developed an interest in setting up his own media server. At the time I had been dabbling in it for a couple of years myself, so we ended up chatting about it. We put together a plan and collaborated on building a server together to learn more about self-hosting. At this point we were talking daily for hours on end, and in that time Jeff became one of my closest friends. He had so many friends I’m not sure if he would count me the same, but I can say he did make a point to let me know he cared. We built some of the coolest projects together, shared memes and TikToks with each other constantly, and became an almost daily part of each other’s lives.

Jeff had one of the kindest, most caring souls I’ve ever encountered. I’ll miss you so much, man. This whole homelab project, built from what I’ve learned collaborating with you, is now dedicated to your memory. Thank you for everything.

Switching To Linux In 2023

I’ve always had a love/hate relationship with Linux. It CAN be great at very dedicated tasks, but it can also be cryptic and difficult to use. I’ve even seen some Linux content creators on YouTube admit the truth: Linux SERVER is great and stable, but Linux DESKTOP is a flaming dumpster fire. Like members of most cults, Linux fanboys will refuse to admit the shortcomings of the OS, but it definitely has them. So here’s my rant, on a website no one will probably ever read, about an unpopular opinion on the Tech-Hipster’s Official OS.

My history with Linux started in 2005, when I was in a trade school learning electronics and PC repair. The course mostly covered Windows since, in practice, it would be on 99% of the desktop PCs we would ever work on, but we touched briefly on Linux to demonstrate that Windows wasn’t the only way of doing things (though we never touched Apple). It didn’t go in depth on Linux, but we were given a copy of RedHat the school had licensed to install on a PC and play around with. I don’t think we even had time to do anything in it once it was installed, if the installer ever completed at all.

Fast forward a few years to around 2011. I had been out of school for a while and had built up a shameful hoard of old PC hardware. As an experiment I installed Ubuntu on one of these old systems to see if I could make it into something useful, and was fascinated by the different desktop interface. During my years in school I had dabbled in web design, and was intrigued by a desktop that felt surprisingly like a Joomla CMS backend. I think this was Gnome2, and I thought it was a pretty decent experience on par with using Windows at the time (though I wasn’t willing to make a full switch due to software compatibility).

Over the years I started delving into Raspberry Pi projects, and that’s when I really started to get a feel for Linux. Most of what I did involved copying and pasting commands from online tutorials; I had no idea what they actually did, but I could follow a tutorial like a cooking recipe. Over time, though, I mentally reverse-engineered what a lot of these common commands were doing. My last Pi project was a Plex server; when it struggled to serve more than one stream of 1080p content at a time, I upgraded to my first x86 server on a Lenovo Thinkcentre Tiny M73, and everything else I’ve done has grown from that.

The first iteration of my server ran Linux Mint, as I had seen many online recommend it as a beginner distro due to its similarity to Windows. I tested Ubuntu from a LiveCD and hated Gnome3’s desktop environment, so Linux Mint had me sold. The green color scheme was pleasing, and overall I liked the user experience pretty well. Even though the primary purpose of the system was to serve Plex media and function as a NAS, I wanted to install a desktop anyway just to play with it.

As I ran into issues I had to fix, I started to notice a pattern. Most of the problems I Googled were leading me to Ubuntu subreddits and forums rather than anything specific to Mint. Solutions I found on the Linux Mint forums were often old and outdated. This is when I began to learn the lineage of the distros: there are a few prime distros (RedHat, Arch, Debian), a handful of mainstream distros based on them (like Ubuntu on Debian, or Manjaro on Arch), and everything else is forked off of the mainstream or prime distros by smaller teams. Linux Mint is actually “third generation”; it’s based on Ubuntu (with its own desktop environments), which is in turn based on Debian. I started actively looking for answers in Ubuntu and Debian forums and subs instead, as they seemed to have a much larger userbase with more up-to-date answers. While building out my webserver and setting up security cameras, I kept running into fixes that worked on the latest Ubuntu but not Mint. Because of the larger support community rather than the Mint user experience itself, I decided I’d be better off just installing Ubuntu.

This is where things started getting difficult and controversial. I upgraded my hardware to an actual enterprise Dell server and installed Ubuntu for a fresh build with easier support. While I was able to find more specific answers to all of my problems, I ended up with a TON more questions to ask. I also started noticing an interesting trend: Ubuntu itself had been getting a ton of hate from the larger Linux community. At first I could understand it a little; Ubuntu had changed a lot since I had first used Gnome2, and any criticism I saw of Gnome3 was being met with “well you’re just looking at GUI design in an outdated way”. I thought maybe that’s where the hate was coming from, but actually it was mostly Linux users mad about snap packages because of their licensing and Canonical’s proprietary store backend.

I couldn’t figure out what the big deal was until I learned that, like most things in life in the past 10 years, the computer OS world was also being destroyed by politics and idealism. Personally I like FOSS and appreciate its advantages, but sometimes I just want to get a job done and don’t care how the sausage is made. Rabid Linux fanboys can’t just USE software though; they have to be able to apply their personal code of ethics to how the software was packaged and licensed, or they’ll screech endlessly about it. I’m pretty sure these same people would spit in the face of anyone who admitted to working for Nvidia, but I digress.

Eventually I picked up a second server to play with, and decided to run Kubuntu on it to see what it was like. My first impression was that KDE was finally the beautiful, Windows-killing, more advanced desktop experience promised by Linux fans everywhere…until it broke. And kept breaking. I couldn’t get KDE to boot up the same way twice, and the widgets I found most fascinating were usually the cause of system-wide crashes. Eventually, to stop the crashes, I broke a bunch of KDE stuff while removing it and got a mostly stable Gnome desktop running instead. A couple months ago that server managed to finally kill itself with a kernel update that borked GRUB. After spending two days struggling to get it back up, I moved all the services it hosted to my primary server and retired it. With my two biggest projects being killed in the past few months anyway, losing the extra server has luckily done little but save on my power bill.

This brings us to the last couple of nightmarish weeks. Microsoft has always been shady about taking control away from the user when it comes to Windows connecting to their servers and forcing updates, but I think they’ve now peaked at the worst user experience since Metro. Windows 11 has been a mess since launch, and Microsoft is doing their best to ignore all user feedback on it and continue pushing changes everyone hates. The taskbar removed all customization features and is now a hulking mass of ugly they force down our throats, and control panel settings are all over the place under the hood. Some Windows 10 settings screens still exist in 11 but have no intuitive way to access them; you have to open them from the Run dialog, since Microsoft wants to hide the fact they’re still using parts of 10. I’ve been using ExplorerPatcher to make 11 behave more like 10 and eliminate all the stuff I hate, but Microsoft has been pushing updates that break it every couple of weeks. The EP devs have a GitHub to report issues and request features, but they’re hostile to most users who post there. I decided it was time to play with alternatives.

Long story short, I’ve spent the last few weeks fighting to get a Linux desktop working on my main PC as a dual-boot option. It took 5 reinstalls and 2 distros to find something that (mostly) works. No Linux desktop I tried was able to properly support mixed monitor sizes and resolutions. I had hoped better hardware would handle KDE better, but it’s just as broken, unstable, and prone to crashing as it was the first time I tried Kubuntu on older hardware. I describe KDE as like a politician’s promise: pretty on the outside, but hollow and fake on the inside. No distro was able to correctly detect my HDMI connections and output correct refresh rates, even though everything worked as it should in Windows. I eventually discovered I could get halfway-functional display scaling working with my monitor setup using Cinnamon, but a lot of the fixes I tried with X11 would fail because Cinnamon has an internal watchdog daemon that overrides custom X11 settings. Xrandr has some possible solutions, but they don’t work on Nvidia GPUs. There’s plenty to criticize about X11 being old and bloated, but Wayland was the only thing that would absolutely crash the entire system to a black screen at random.

Of all the different configurations I tried, I finally found a decent experience with Ubuntu Cinnamon Remix 23.04. I started at 22.04 for the long-term support, but found after upgrading that 23.04 handles my monitor setup a LOT better with more stable display settings. Scaling still does this idiotic thing where it changes the representation of the monitor’s physical size in the layout settings, but I can at least work around that by getting in the habit of moving between monitors at a specific spot on screen. Sound never worked after boot unless I physically unplugged and replugged my HDMI cable, until I found a stupid fix for PulseAudio: enabling an option from a missing add-on settings screen that outputs audio to all ports simultaneously. A few overpriced replacement HDMI components later and I’m able to run 4K60, though I have mixed feelings about being locked into RGB mode instead of 4:4:4 mode. Sometimes it’s better, but sometimes I wonder how bad my color settings really are, since I suck at calibrating them. I’m actually liking the experience more as I get things fixed, except for a persistent issue where video playback stutters after waking my monitors from sleep.

After getting everything set up, I learned more about Microsoft that has me really wishing Linux offered what I need to use it full time comfortably. Apparently Microsoft has access to open and read all of your Office documents whenever they like, and if they politically disagree with anything you wrote or created, they can terminate your software license. They’ve already taken actions like this against Xbox accounts based on things you do OUTSIDE of their services and unrelated to them. While I don’t use Office (I use Google, which is absolutely no better, but at least it’s free), I’m strongly opposed to big corporations having the right to invade privacy like that. Even if the same thing isn’t baked into Windows 11 yet, current trends point toward a near future where, if you have the “wrong” opinions, Microsoft may disable your Windows license whenever they feel like it. At least Linux people understand this, and since anyone is able to make their own distro whenever they like, tech-savvy coders keep Linux’s nose clean when it comes to respecting privacy.

What kills me the most about the whole situation is that there IS no good option when it comes to operating systems anymore. Apple has a walled garden that fails to innovate; they have bulletproof hardware that runs at specs PC users had 10 years ago, and everyone is too busy sniffing their own farts to consider holding the company accountable for substandard performance. Microsoft I’ve already covered, but I ABSOLUTELY CANNOT recommend Linux desktops to the casual PC user as a viable alternative. It’s loaded with bugs, simple tasks require a bunch of cryptic console commands instead of a simple GUI option, and it’s loaded with opinionated politics that often take precedence over useful functionality. Not only is it difficult to get the software working in the first place, but the plethora of file permissions and privileged execution tasks means your PC will be fighting you every step of the way. As shitty as Microsoft is, and as shitty as Windows 11 may be, there’s no competing with its ease of use when you just want to get the job done rather than tweak a bunch of config files in a terminal. There’s a reason it stays on top, even if the company is evil.

The most tragic part of it all is that I’ve found the Linux community essentially gave up on trying to make things better. There’s so much potential, but the norm seems to be to get to 80-90% of that potential before giving up and making excuses for continued problems. The very people who love it the most tend to be the worst part of everything Linux. When I first tried Linux around 10 years ago the experience was fairly on par with other offerings. Now that hardware has advanced, Linux fanboys will justify never changing or improving the desktop experience for the average user by saying stuff like “Linux isn’t supposed to be user friendly, it defeats the purpose” or “akchually desktop GUIs are going to be obsolete in a few years anyway, then everything will be a text terminal”. I’ve even seen one guy, who I assume probably abuses puppies in his free time, argue that having a GUI is bad for your health and constantly typing everything with ALL of your fingers DECREASES your chance of carpal tunnel. Any justification at all must be given to explain why Linux must always stay a difficult experience to use, logic be damned. Essentially the permeating attitude I’ve seen is “if we don’t keep things cryptic, how can we act like an elitist, gatekeeping cult who tells ourselves we’re smarter because our software is hard to use?” I can accept that Linux isn’t an open COMPETITOR to Windows since it’s not a single business trying to compete for profits, but at least if you’re going to develop something, be open to actually improving it.

A lot of huge Linux fanboys will read this and say “oh well, I’ve never seen that attitude in the community”. If it’s a vocal minority, it sure has invaded every single social platform like Reddit and YouTube. I don’t think I’ve ever posted a question in a major Linux discussion platform that wasn’t met with arguments for why I shouldn’t be using Linux that way instead of an actual solution. That’s so weird to me; arguing that you shouldn’t do a computing task at all because it’s not something they’d do on their own system is like someone who hates grapes arguing that you shouldn’t live in a house because houses aren’t meant to harbor people who eat grapes.

I’m going to keep trucking with Ubuntu Cinnamon for a while, at least until I want to play most of my Steam library, or I just can’t stand the bugs anymore. Fortunately I have a dual boot setup so I can use either one as needed. I guess I just wish we would go back to when technology improved with time, instead of everything getting worse every day in the name of corporate greed or political grandstanding.

Reddit’s Continued Protest & Blackout

So first of all, I’d like to get my personal view out there: I disagree with the actions Reddit has taken with their API recently. As a private business and the developer of their own software, they have every right to do it, but that doesn’t make it the right thing to do. I’ve seen very little to indicate this will affect most users, but the most prominently affected groups are the blind and disabled communities, who apparently relied on third-party apps to help them use the site. The fact that this business move will hit them hardest makes it extra crappy.

THAT. BEING. SAID.

This massive protest of locking down and restricting subs is infuriatingly stupid and ineffective. I get it, everyone hates Reddit and wants it to fail, but this affects the internet at large far more than it affects Reddit itself. The admins can force the doors open any time they want. The only ones hurt are the people Googling for answers on a topic they care about; you’re punishing society at large over something 99% of the people landing on your sub won’t understand even if you preach it to them. It’s comparable to standing in the road to block people from getting to work so you can shout at them about a cause that has nothing to do with them.

It’s not going to get you heard and validated. It just makes everyone hate you.

Signed,

User# 98237929837, who just wants to know how to set up a FOSS app or two.

Genesis

Welcome to the first post of the new project blog!

To kick things off, I’ll recap the most recent project: ZDMedia finally has a website. I’ve debated what the website should be ever since buying the domain 2 years ago, until I realized the answer had been staring me in the face for the last 2 decades. I’ve worked on a lot of web and computing projects, including starting many short-lived websites of my own and helping friends kick off theirs. I’m no professional by any means, but that’s exactly why I do it: to learn through hands-on experience. Over years of using design software and building computer-based electronics projects that needed names, I’ve come to use “ZDMedia” as a stand-in business name any time I need to brand something. It references my first website, “Zora’s Domain”, a Zelda fan site I built in 2002. It became a habit, but I never figured out what I could cohesively turn it into until now.

Instead of just branding my miscellaneous projects with ZDMedia (a brand that’s become like a personal alias), I decided the website should be the perfect place to bring all of my projects together under one umbrella. In the past year, ZDMedia has expanded beyond just myself; I’ve partnered with Jeff to build an infrastructure around Jefflix, and we’ve built a team of contributors to help support that server infrastructure. Jefflix is Jeff’s brainchild (hence the name), while the ZDMedia team helps to make it possible. With that in mind, I built the new website around the idea that it should serve as a portal to all of my ZDMedia-branded projects as well as unify everything the team collaborates on for Jefflix.

Guests to the site have links to all of the publicly branded projects: the FrequencyX scanners, the Retrosyde Archive YouTube channel, and this tech blog. There’s more available to registered users, but registration is only open to friends, family, and members of Jefflix. Our goal with the tech blog is to chronicle some of the interesting things we encounter while building stuff, and hopefully visitors can learn from our experiences too. I’ve found Organizr to be a great web app for pulling a ton of web applications into one handy index, so it made the perfect framework for what I needed ZDMedia to be. Like nearly everything we do, the website is an ongoing project, so I’m always brainstorming new things to add for visitors to enjoy. If you’re reading this, thanks for stopping by!
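For anyone curious about self-hosting something similar, Organizr is commonly deployed as a Docker container. This is just a minimal sketch, not our exact setup; it assumes the organizr/organizr image from Docker Hub, and the port, config path, and user/group IDs are placeholders you’d swap for your own:

```shell
# Minimal Organizr deployment sketch (assumes the organizr/organizr image).
# PUID/PGID map the container's user to your host user so the config
# files under /opt/organizr stay editable; the host port 8080 is arbitrary.
docker run -d \
  --name=organizr \
  -e PUID=1000 \
  -e PGID=1000 \
  -p 8080:80 \
  -v /opt/organizr:/config \
  --restart unless-stopped \
  organizr/organizr
```

Once the container is up, browsing to the mapped port should land you in Organizr’s setup wizard, where you create the admin account and add a tab for each web app you want in the index.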