Wednesday, May 31, 2023

It's all connected: add blockers, micro payments and the sustainability of personal blogs and projects

Recently, Cory Dransfeldt's blog post "I block ads" went viral on Hacker News.

I find myself doing much the same as he does.

How I block ads

I use Safari extensions on macOS, iPadOS and iOS: Wipr, 1Blocker, Hush and Vinegar. I paid for these apps because they make the browsing experience so much more enjoyable, less cluttered and faster. Besides all this, I also block ads with the help of NextDNS. An added bonus: whenever visitors are on my guest Wi-Fi, their TikTok, Instagram and the like are blocked too, which makes for a truly undisturbed and cosy evening.

Ads are malware?

All this attention-grabbing is an attack on the mind, a focus-sapping drain on our happiness. That is probably why I found this comment by user "Nextgrid" so insightful:

There is no difference between ads and malware - both are code that uses your machine to make it do something undesirable, and often stalk you (modern adtech is often way more powerful than the spyware of the old days) in addition. It is your absolute right to block all malicious code.

Operating systems mostly wised up to their earlier vulnerabilities and patched most of the avenues malicious code could load itself on your machine. Browsers countered the early abuses of the pop-up window feature by blocking them and nobody complained. Operating systems and browsers should include ad- and spyware-blocking by default just like they block conventional binary viruses.

Of course, one problem with blocking ads is that it's really hard to distinguish them from normal text or images on websites. So out-of-the-box blocking will always have to rely on blocklists like the ones uBlock Origin uses. These lists depend on knowing the bad actors in this space, and the domain names they serve ads from. Blocking all JavaScript will stop some trackers, but it will also break many features of websites.
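
To make that concrete, here are a few rules in the Adblock-Plus-style filter syntax that uBlock Origin understands (the domains are just well-known examples of the kind of entries you find on public blocklists):

! Block network requests to known ad/tracking hosts
||doubleclick.net^
||googlesyndication.com^

! Cosmetic filter: hide any element with this class, on every site
##.ad-banner

! Exception rule: let one site's first-party script through anyway
@@||example.com/stats.js$script,domain=example.com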

Google makes it more difficult

This will always be a cat-and-mouse game. Google upped the ante by making a fairly good web browser, Chrome, and now that more than 50% of all internet traffic globally goes through this browser, they are making it harder to block ads in it, as several recent headlines illustrate. Well played, Google.



Are all ads evil?

An interesting opposing view was that of user "jader201", who wrote:

In 2004, I started a small website on the side that had no business value at all. It was just a small community of video game fans (specifically Animal Crossing), and pretty quickly, the costs outgrew what I was willing to afford on my own.

So I opened up donations. Unfortunately that didn’t get me very far, so I eventually— and begrudgingly — added ads. I was very intentional about keeping them as unobtrusive as possible: just some banner ads at the top and bottom of each thread and forum, and a square next to the first unread comment. I hated pop-ups (the kind that opened in a new window — very common at that time), and I didn’t have them on any other page on the site (just forums).

This gained me about 10x the revenue I got from donations alone, and was able to scale the site’s hardware, allowing me to keep the site up and running for nearly 20 years, and provided some modest passive income for a time. (I just recently sold it to someone else, but it’s still up and running.)

My point is, that wouldn’t have been possible without ads.

This is an interesting point that I run into as well. It's almost impossible to make a living from writing unless you are very well known. I hear Jason Kottke can live off the income generated by his blog, but he started 8 years earlier than I did, and he puts way more time and effort into his blog than I ever did.

I like to write about technology, photography, financial well-being, and generally about whatever I fancy that day. But I don't even dream of making a living off my musings. In fact, very recently, I signed up for the Amazon Affiliate program again. I think I tried it in 2017 but didn't generate enough clicks, so I was kicked out of the program. Also, there are now Google banners on my blogs. They don't generate any money yet, but a guy's got to dream, right?

Ads are a lose-lose outcome

The reason I resisted having Amazon affiliate links or banners on my blogs and websites for so long is a simple one: I want to practise what I preach. I don't like to see ads on websites; hence, I block them. I don't mind affiliate links as much, but they are not ideal either.

What struck me was that many comments on Hacker News focused not on how people block ads, but on why. It seems to me that many of us technologically inclined folk feel deep down that although we have every right to block ads, we are depriving hard-working bloggers and maintainers of small forums and community websites of the money they so desperately need to stay afloat financially.

How to pay makers/writers/authors/singers directly?

I am currently subscribed to ±230 RSS feeds. Some of these are not from websites or blogs but from Twitter accounts, which I can follow because Nitter turns any Twitter account into an RSS feed (e.g. https://nitter.net/<username>/rss). I would never pay for all that content. Even if every item my RSS reader downloads cost only $0.10, I would still be spending more than $5 a day (that's over 50 items daily, which 230 feeds easily produce), much of it stuff I don't read but merely scroll past while scanning the headlines for hidden gems.
So building a paywall, perhaps with HTTP 402 "Payment Required", into every web page and feed doesn't feel workable to me. I suspect most people feel the same.
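
For the curious: 402 is a real status code, reserved in the HTTP specification as "Payment Required" but never standardized in practice. A paywalled feed could answer roughly like the hypothetical session below (the server, the JSON shape and the truncated invoice are all invented for illustration):

$ curl -i https://example.com/feed.xml
HTTP/1.1 402 Payment Required
Content-Type: application/json

{"message": "10 sats per item", "invoice": "lnbc100n1..."}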

But I know from experience that I don't mind paying for good content after enjoying it. I used to buy videotapes and DVDs (yes, kids, I am that old) because I had seen a movie and liked it. I listen to and pay for podcasts that carry a value4value tag in their RSS feed; at 10 sats per minute, an hour of audio costs me about $0.16.
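
For reference, this is roughly what such a value block looks like inside a podcast feed, following the Podcasting 2.0 namespace. I am quoting the format from memory and have shortened the recipient's node key, so double-check against the namespace documentation:

<podcast:value type="lightning" method="keysend" suggested="0.00000015000">
    <podcast:valueRecipient name="Host" type="node"
        address="032f4ffbbaff..." split="100" />
</podcast:value>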

Not only that, but almost daily I read comments or see memes on Nostr that I like enough to tip the writer/maker. I've set my tipping ("zap") limit to somewhere between 3,500 and 4,000 sats, so an excellent comment will get about $1 from me.

One US Dollar

$1 is not a lot of money, not even enough to buy a coffee in most countries. But now imagine that many people liked the same text or picture because it really was that interesting. Imagine a lot of these folks tipping (zapping) as generously as I do. Suddenly, some of these writers can make a decent extra income from their hobby.

There are multiple competing and incompatible ways of paying over the internet. The easiest to set up that I've found is the Payment Widget made by René Aaron. I've made one here to try it out for myself.
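
From memory, embedding it amounts to a script tag plus a custom element, roughly like the sketch below. Treat the tag name, attributes and script URL as unverified recollection, and the lightning address as a placeholder; copy the real snippet from the widget's own site instead:

<script src="https://embed.twentyuno.net/js/app.js"></script>
<lightning-widget name="Jan" to="you@your-wallet-provider.com" accent="#20c997"></lightning-widget>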





How to thank the creator of the link, as well as the creator of the content?

This is a great way to pay the creator of the content. But how do you pay the person who pointed you to the interesting link? Do you go back in your browser to tip them? Do you even think about that, ever? I guess not. Wouldn't it be great if it were a convention to write links as follows:

<p>My favourite website is  <a tip="tip-to-jan@janromme.com" href="https://somesite.com">this website</a>.</p>

There are already other kinds of data that we can put in an HTML link. For example, there is a bunch of relationship values (like author, source, external) that we can put in the REL attribute. What if there was a TIP attribute as well, where someone could simply add their lightning address to receive tips for the trouble of linking you to this useful content?
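
Until such an attribute exists, HTML already offers a standards-compliant escape hatch: data-* attributes are valid on any element, so a browser extension or userscript could start honouring them today. The name data-tip is my own invention here:

<p>My favourite website is <a data-tip="tip-to-jan@janromme.com" href="https://somesite.com">this website</a>.</p>

A script would read the address through the element's dataset.tip property and offer to zap it.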

Maybe the internet wouldn't be so riddled with advertisements then 🤔


You can discuss this blog post on Hacker News or Stacker News

Wednesday, May 24, 2023

Let's uproot the dating market with an open-source, open-protocol app.


 

The way I see it, there are multiple problems with dating apps:

 

1. The chicken-and-egg problem of dating apps: why would I sign up for this app? It looks empty; it doesn't have enough users in my area. If everyone feels this way, no one will sign up.

2. Another concern with dating apps is that it's mostly men who create accounts on them.

3.  Maybe women on these platforms feel easily attacked or threatened by too much attention? Some men apparently are misogynous. 

 

Who will build a NOSTR-based dating app that addresses these difficulties?

 

Features should include:

 

1. Easy on-boarding: easily create a #nostr account. A user doesn't need to know the protocol is nostr, though; that is merely a technicality that makes it straightforward to build this app quickly (see the event sketch after this list).

2. Many specific filtering options, built in a way that doesn't damage your privacy. I want to select a person based on location and religion, for example, without the whole world being able to read my location and religion on my open-access #nostr profile. This is a tough one to crack, I think.

3. We only want serious people, so fidelity bonds should be an option. Or maybe simply paying for a "nostr-dating.com" relay is all we need. Perhaps $5 a week, for example?

4. Furthermore, when mistreated, I, as a male or female, should be able to downvote another user. After a certain threshold, the downvotes become visible, and other users can choose to preemptively block or ignore the misbehaving user. These votes should be confidential and not traceable back to a specific user.

5. #nostr by default has no way to store photos or videos. But for prospects to see each other, those are essential. Possibly, the aforementioned subscription to a "dating relay" that only relays messages containing #singleandreadytomingle should also provide temporary hosting services.

6. Added bonus: once you do find someone and stop paying the relay its weekly fee, it deletes all your messages, photos and videos.

7.       Private DMs are indispensable as well.
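
For reference, every nostr message, whatever the client, is just a signed JSON event as defined in NIP-01; a dating profile would simply be one more event kind. The kind number and tag below are invented for illustration:

{
  "id": "<sha256 hash of the serialized event>",
  "pubkey": "<author's public key, hex>",
  "created_at": 1685000000,
  "kind": 31337,
  "tags": [["t", "singleandreadytomingle"]],
  "content": "Profile text that the dating client renders",
  "sig": "<Schnorr signature over the id>"
}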

 

What else am I missing?

 

Let's uproot the damaging multi-billion dollar "dating app market" with open-source and open-protocol tools ✌️✊

 

You can discuss this blog post on Hacker News or Stacker News


If you enjoy reading Jan's tech talk, why not subscribe to my RSS feed?


Saturday, May 20, 2023

A ChatGPT/Forever Voices/Midjourney horror story


 

  1. Use Midjourney to create numerous sassy pictures, all of one woman.
  2. Create a good backstory, thanks to ChatGPT.
  3. Create a virtual persona, "Her" (the 2013 movie) or "S1mone" (the 2002 movie) style, by prompt-engineering some boundaries into ChatGPT.
  4. Give her a soul-rattlingly sweet voice by paying a nubile teen to "lend" her voice to your project. Use Forever Voices to clone this voice.
  5. Let lonely guys chat with her for $1 per minute. To keep them satisfied, the AI model sporadically sends the man one of the photos.
  6. $$$ rains in.
  7. These men already had little self-worth and self-esteem; now they sink to new lows, paying not for the attention of a real person but for something even lower.
  8. The guy now feels even worse than before.

Who wins in this scenario?

 

Nobody: 

  • Men are worse off because their self-esteem, and that of their peers, goes down the drain. A quick fix is easily found, instead of putting time and effort into the hard work of working on yourself: your manners, your grooming and your real social skills.
  • Women now have even more socially awkward men around. Moreover, these men want to put in even less effort, because a cute chick can be "had" for only $1 a day/week. Additionally, women are pressured even more into being even prettier, because the competition is now a "perfect" Midjourney puppet.

 

None of this is impossible today; in fact, it is already happening. Read about CarynAI here: https://nypost.com/2023/05/16/i-went-on-a-date-with-chatgpts-carynai/.


Want to discuss this short horror story? Post your thoughts in this thread on Hacker News.



You might have noticed I stuffed some Amazon Affiliate links in this post. That's on purpose: I hope to monetize this blog, and I hope you don't mind me earning an honest living.



Thursday, May 18, 2023

Widely used chemical strongly linked to Parkinson’s disease | Science | AAAS


A groundbreaking epidemiological study has produced the most compelling evidence yet that exposure to the chemical solvent trichloroethylene (TCE)—common in soil and groundwater—increases the risk of developing Parkinson's disease. The movement disorder afflicts about 1 million Americans, and is likely the fastest growing neurodegenerative disease in the world; its global prevalence has doubled in the past 25 years.

The report, published today in JAMA Neurology, involved examining the medical records of tens of thousands of Marine Corps and Navy veterans who trained at Marine Corps Base Camp Lejeune in North Carolina from 1975 to 1985. Those exposed there to water heavily contaminated with TCE had a 70% higher risk of developing Parkinson's disease decades later compared with similar veterans who trained elsewhere. The Camp Lejeune contingent also had higher rates of symptoms such as erectile dysfunction and loss of smell that are early harbingers of Parkinson's, which causes tremors; problems with moving, speaking, and balance; and in many cases dementia. Swallowing difficulties often lead to death from pneumonia.

About 90% of Parkinson's cases can't be explained by genetics, but there have been hints that exposure to TCE may trigger it. The new study, led by researchers at the University of California, San Francisco (UCSF), represents by far the strongest environmental link between TCE and the disease. Until now, the entire epidemiological literature included fewer than 20 people who developed Parkinson's after TCE exposure.


What I find most shocking is that all chemicals from the pre-EPA era are simply "grandfathered" into the system. In other words: companies that have made dangerous chemicals for decades don't need to prove to government agencies that the chemicals in question are safe, because they have been in use for so long already.

Can you imagine what would happen if all these thousands of known compounds had to prove their safety? How many chemicals would we still allow in our food, clothing, electronics, beauty products, and our air, soil and water? How much economic "damage" would be done in the short term if we became environmentally prudent? And how much human suffering would be prevented in the long run if we did?


A Vision for Nostr. Coracle blog.


I love open protocols. Just the fact that anyone can build on them has a magical feel to it.

 

What would the world look like if the World Wide Web and the internet, with all its open protocols, did not exist? We would be far more dependent on big corporations, and their willingness to let us into their walled data storage. It would be much pricier to communicate with other people, and our communications would not and could not be private.

 

Many blogs, early forms of "social media," were built on open protocols. Anyone could use an RSS reader to subscribe to the blog, and read it on a platform, device, or app of their own choosing.

 

Then, Big Money marched in: Myspace, The Facebook, Twitter, YouTube and Instagram are all still built on open protocols like HTTP, HTML and CSS, but now you have to sign up first, and then log in, to be allowed to view other people's text, photos and video clips.

 

Ugly walls rose in our beautiful garden, the open internet.

 

The monetization of our eyeballs, the scheme whereby these companies make money by showing us targeted advertising, became a multi-billion dollar business.

 

Don't get me wrong: if you feel happy making Meta's (that is, Facebook's, Instagram's and WhatsApp's) shareholders happy, and in so doing helping the open internet die, then certainly, please continue to use their apps. I do as well. Not because I want to, but because that's where all my friends are. After all, where else could they go?

 

Even if you wanted to switch from Instagram or Twitter to an open alternative, where would you go? Any new app that comes along feels as scary as a large empty playground. 

 

The beauty of building a new platform on existing open protocols is that from day 1 of your new app, you immediately have millions of potential users. If I launched a new email client today, how many people could I potentially reach with that client? I guess about 4 billion, right? Now if I launched a new closed social media platform, let's say a new Twitter that you had to sign up for to be allowed to use it, then how many users would I have from day one? Exactly: zero. Zero! That is the true difference between open and closed platforms.

 

NOSTR = Notes and Other Stuff Transmitted by Relays.

 

I am currently very positive about Nostr, an open protocol that allows anyone to build a number of different social media "platforms" on top of it.



  • Twitter clone, but without the blue-check madness? Damus.io to the rescue: http://damus.io

 

I ran into this interesting blog post by "Coracle", another platform built on the open Nostr protocol:


Source: https://blog.coracle.social/a-vision-for-nostr.html

If I had to sum up what the core principle of Nostr is, I would say "individual sovereignty". Nostr is a social experiment that asks people to take responsibility for what they say (and sell, host, publish, promote). This topic has been explored ad nauseam by better writers than I using Bitcoin as a vehicle, so I'll avoid re-treading the same ground if I can, except to point out that the two key design decisions of the Nostr protocol, self-custody of keys and hosting spread across multiple relays, simultaneously entrust control to users and revoke certain entitlements users are accustomed to.


Many users including myself have experienced a 10x or more increase in engagement, despite a much smaller number of people on the platform. This is of course likely due not only to the lack of an algorithm, but also to the lack of celebrities, which tend to absorb attention, leaving little for the rest of us. But for now, Nostr is for the plebs.



Most people I know don't have Twitter accounts, or use them. For them, Twitter is about as relevant to their lives as CNN. Instead, they use private Facebook groups to arrange babysitters for their kids, or Craigslist to buy and sell local goods. They use Google maps to find reviews for nearby businesses, and the church email list to keep up with prayer requests. They subscribe to newsletters their friends publish, and spend their days at work sending memes over Slack. The common theme here is that all these platforms connect "us" with "mine", not with "them". And yes, journalism and topical interest ala Reddit is a part of this, but for normal people, a vanishingly small part. But let's stop squawking about "echo chambers".


Keep dreaming, kids, and keep building.




Monday, May 15, 2023

How to use ChatGPT realistically to do daily tasks faster.




This blog post is entirely written by a meat bag (that is Bender talk for 'human'). 👋

The situation.

My morning routine sometimes involves scrolling through Hacker News, and this morning I ran into someone's side project called VORE: an RSS reader that is not an app but a webpage.

The neat thing about having an RSS reader inside a webpage is that you can access it anywhere.

  • There is no need for an app on your phone or PC. You simply visit the URL, and there is your curated list of stuff you want to read that day.
  • Another neat feature of having a URL as your personal RSS feed is that it's trivially easy to share it with a friend, thereby giving them access to your entire catalog.
  • The way VORE works is that the URL is simply your username. In my case I picked the meaningless username HJ56a, so my personal RSS feed can now be viewed by the whole world at https://vore.website/HJ56a

[Insert This Is The Way meme here]

The problem.

To import my list of RSS feeds that I am currently subscribed to, I had to export them first from NetNewsWire (I highly recommend it if you are on macOS or iOS). 

NetNewsWire exports to an OPML file. Vore.website doesn't import OPML files. I was stuck.

ChatGPT to the rescue?

So, I need to convert an OPML file to a list of bare RSS feed URLs (that is, URLs that lead to an .rss, .xml or .atom page).

I remembered that I forked over $25 to OpenAI this month to try out GPT-4. So, I logged in and asked this:

I have an OPML file here. Do you know what that is?

The response:

Yes, I do. OPML (Outline Processor Markup Language) is an XML format for outlines (defined as "trees, where the nodes contain strings"). It was originally developed by UserLand Software for use with their Radio UserLand product. OPML files are used to exchange information between different types of applications such as RSS readers, mind mapping tools, and note-taking apps.

In the context of RSS feed readers, OPML files are commonly used to import and export subscriptions. This allows users to easily migrate their feeds between different platforms. Each entry typically includes the title of the feed, its URL, and sometimes additional metadata.

Here's a simple example of what an OPML file might look like:

<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
    <head>
        <title>My Subscriptions</title>
    </head>
    <body>
        <outline text="Example Feed" title="Example Feed" type="rss" xmlUrl="http://www.example.com/rss"/>
    </body>
</opml>

In this example, there's a single RSS feed titled "Example Feed" located at "http://www.example.com/rss".

Okay, so ChatGPT at least knows what an OPML file is. Good to know.

Reader, be warned: here be dragons 🐉

The difficulty with ChatGPT is that you have to ✨know what to ask for✨ before asking it. It's true that GPT-4 is more able than previous versions to come up with things you didn't specify, but it's still very useful to be able to steer the conversation, as this example will highlight.

You see, ChatGPT is not really intelligent at all; it is simply a slightly more dandy-sounding Google or DuckDuckGo. It doesn't have true creativity, but it's an index, if you will, of previous human ideas. As Ted Chiang (yes, the gentleman of 'Story of Your Life'/'Arrival' fame) said recently, AI presently gives a low-resolution image, or "blurry JPEGs of the web".

This, incidentally, is also a side-plot in Guardians of the Galaxy Vol. 3, but I digress.

Okay, so I know deep in my bones that there is a plethora of command-line tools that were perfected somewhere in the 70s or 80s and have since gone the way of all things beautiful and useful: they have been forgotten. You see, every generation thinks it knows best and therefore reinvents stuff that has already reached peak performance. Because I, however, read the classics (the grey beards' comments on Hacker News), I happen to know of command-line tools like awk, sed, grep and curl, to name a few. I just don't have a clue how to use them.

This leads us to my next question to ChatGPT.

O wizard, conjure me a script? 🧙‍♂️

My next question to ChatGPT was:

Can you write a simple script for me, maybe in BASH, that takes this one OPML file as an input and strips all data except for the htmlUrl part, so that afterward I just have a list of URLs left?

What I expected was some AWK, SED, GREP magic. What I got was this:

Sure, here is a simple BASH script using xmlstarlet, a command line XML toolkit, to parse the OPML file and extract htmlUrl attributes.

#!/bin/bash

# Check if file is provided as argument
if [ $# -eq 0 ]
then
    echo "No arguments supplied. Please provide the OPML file as an argument."
    exit 1
fi

# Check if xmlstarlet is installed
if ! [ -x "$(command -v xmlstarlet)" ]; then
  echo 'Error: xmlstarlet is not installed.' >&2
  exit 1
fi

# Process the OPML file to extract htmlUrl
xmlstarlet sel -t -v "//outline/@htmlUrl" -n $1

Save this script in a file, make it executable using chmod +x scriptname.sh and then you can run it with the OPML file as an argument like ./scriptname.sh file.opml.

This script checks if the OPML file is provided as an argument and if xmlstarlet is installed in the system. If both conditions are met, it will process the OPML file and print all htmlUrl values, one per line.

Make sure to install xmlstarlet if it's not already installed in your system. You can install it using package managers like apt for Ubuntu (with sudo apt install xmlstarlet) or brew for macOS (with brew install xmlstarlet).

I had never before heard of the little free tool named xmlstarlet, but it didn't sound like something ChatGPT hallucinated, so I ran the command brew install xmlstarlet in my Mac terminal.

Next, I copied that little script and ran the chmod command. Again, I never knew you needed to give a script the +x flag (x is indeed for executable).
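
If you have never seen it before, this is all the flag does: it flips the executable permission bit on the file. A hypothetical session (the filename is invented):

$ ls -l extract-feeds.sh
-rw-r--r--  1 jan  staff  412 May 15 09:00 extract-feeds.sh
$ chmod +x extract-feeds.sh
$ ls -l extract-feeds.sh
-rwxr-xr-x  1 jan  staff  412 May 15 09:00 extract-feeds.sh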

Next, I ran the command in my terminal, and lo and behold, it actually worked!

So, I copied this list of RSS feed URLs into VORE, only to be disappointed: the URLs were all wrong 🥲

We don't need htmlUrl

This is where a non-technical person might have given up. But not you and me, no!

I simply assumed VORE isn't smart enough to visit every URL I pasted into the box and fetch the exact RSS feed URLs itself. So, I opened the OPML file to find out what it actually looks like.

As we can see in the example above, each outline element carries two kinds of URL attributes: htmlUrl and xmlUrl. And I had extracted precisely the wrong one.

To see if ChatGPT could help me select the right attribute, I then asked this:

Thanks a lottle. Can you change the script a little bit, so that it doesn't list the htmlURL elements, but rather the xmlUrl elements?

The response made perfect sense. It told me to change the last line of the bash script to:

xmlstarlet sel -t -v "//outline/@xmlUrl" -n $1 

After pasting this new list into VORE, I discovered yet another problem: there were a bunch of Twitter URLs in the list. This made sense for NetNewsWire, because until recently there was a plug-in that turned a Twitter feed into content for NetNewsWire; now that most of the Twitter APIs are closed off, this no longer works. I had just never cleaned up my feed list in NetNewsWire.

I wasn't about to do so now, because deleting a source also deletes all bookmarks from that source in this app, and I don't want to lose those just yet. I could go the tedious route where I 1. get the Nitter.net URLs for these Twitter URLs, 2. find the old posts that I bookmarked, and 3. bookmark them again.

But I'm counting on Nitter being blocked some time soon as well, so why waste the energy?

Anyway, now I needed to clean up this list to remove all Twitter-related URLs, so I turned to ChatGPT one last time:

can you add one more feature to this script? If the resulting output is simple a domain like https://twitter.com/, then I don't want to see it. So I want to filter them out

The output this time did include a reference to a tool that I know of. Remember in the beginning where I bragged about having heard of grep? Well, here it turns up:

Certainly. We can use a simple filtering mechanism with grep to filter out the lines that end with a domain. Here's the updated script:

#!/bin/bash

# Check if file is provided as an argument
if [ $# -eq 0 ]
then
    echo "No arguments supplied. Please provide the OPML file as an argument."
    exit 1
fi

# Check if xmlstarlet is installed
if ! [ -x "$(command -v xmlstarlet)" ]; then
  echo 'Error: xmlstarlet is not installed.' >&2
  exit 1
fi

# Process the OPML file to extract xmlUrl and filter out base domains
xmlstarlet sel -t -v "//outline/@xmlUrl" -n $1 | grep -vE "^(http|https)://[^/]+/?$"

The grep -vE "^(http|https)://[^/]+/?$" command is used to filter out any lines that match the regular expression pattern, which in this case is a pattern matching a simple domain like https://twitter.com/ or http://example.com. The -v option in grep is for inverting the match, meaning it will exclude lines that match the pattern rather than including them. 

So, the last line of the final version of my script now reads:

xmlstarlet sel -t -v "//outline/@xmlUrl" -n $1 | grep -vE "https://twitter.com"

The resulting list in my terminal window was everything I had wished for. After pasting this list into VORE, I am now the proud owner of a functional RSS feed on the web. 😄


If you want to play around with this OPML to RSS feed URLs script, here it is:
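
(Assembled from the steps above; the last line is my Twitter-filtering variant.)

#!/bin/bash

# Check if file is provided as an argument
if [ $# -eq 0 ]
then
    echo "No arguments supplied. Please provide the OPML file as an argument."
    exit 1
fi

# Check if xmlstarlet is installed
if ! [ -x "$(command -v xmlstarlet)" ]; then
  echo 'Error: xmlstarlet is not installed.' >&2
  exit 1
fi

# Extract every xmlUrl attribute and drop the Twitter feeds
xmlstarlet sel -t -v "//outline/@xmlUrl" -n $1 | grep -vE "https://twitter.com"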


You might have noticed I stuffed some Amazon Affiliate links in this post. That's on purpose: I hope to monetize this blog, and I hope you don't mind me earning an honest living.

To discuss this article, you can comment at HN: https://news.ycombinator.com/item?id=35949269