December 15, 2019

Using Newsblur's API

Everybody should have a feed reader for RSS feeds. Lately I decided to subscribe to some feeds from local organisations (like the zoo, some local fablabs, or the nearest police station). Things that happen nearby are more interesting, after all.

Yesterday I found out that our city administration provides RSS feeds - many of them, actually. They are all listed on this website, and I wanted to subscribe to all of them. I didn’t find an easy way to subscribe to many feeds at once through the Newsblur web interface, so I decided it was a good day to try out the Newsblur API. (While the API is well documented, I missed having some examples to play around with. So here is my example.)

For authentication, I got my session cookie like this:

curl -X POST \
    --cookie-jar session.newsblur \
    --data "username=$MY_USERNAME" \
    --data "password=$MY_PASSWORD" \
    https://www.newsblur.com/api/login

After that, the file session.newsblur contains something that looks like a session ID:

#HttpOnly_.newsblur.com TRUE / FALSE 1601234874 newsblur_sessionid xa3w22sometokenspecifictome6duds
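The jar uses the standard Netscape cookie format (tab-separated: domain, subdomain flag, path, secure flag, Unix expiry time, cookie name, cookie value), so the token can be pulled out with awk if you ever need it on its own. The file name below is a demo name and the token is, of course, fake:

```shell
# Recreate a cookie jar like the one above (demo file name, fake token);
# real jars are tab-separated in the Netscape cookie format.
printf '#HttpOnly_.newsblur.com\tTRUE\t/\tFALSE\t1601234874\tnewsblur_sessionid\txa3w22sometokenspecifictome6duds\n' \
    > session-demo.newsblur

# Field 6 is the cookie name, field 7 its value
token=$(awk '$6 == "newsblur_sessionid" { print $7 }' session-demo.newsblur)
echo "$token"
# → xa3w22sometokenspecifictome6duds
```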

As a test I tried reading a list of all my feeds (the json_pp is just for formatting):

curl -X GET \
    --cookie session.newsblur \
    "https://www.newsblur.com/reader/feeds" \
    | json_pp

This works just fine: the response contains information about my personal feeds, so we know the authentication worked.
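Instead of eyeballing the pretty-printed JSON, the response can also be picked apart programmatically. A minimal sketch — the file name and sample data here are made up, and I'm assuming the top-level "feeds" key is an object keyed by feed ID:

```shell
# A made-up miniature of the /reader/feeds response; the real entries
# carry many more fields per feed.
cat > feeds-demo.json <<'EOF'
{"feeds": {"42": {"feed_title": "Example A"}, "43": {"feed_title": "Example B"}}}
EOF

# Count subscriptions without fragile grep-based JSON parsing
count=$(python3 -c 'import json; print(len(json.load(open("feeds-demo.json"))["feeds"]))')
echo "$count feeds"
# → 2 feeds
```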

Next I want to subscribe to a single feed (https://www.nuernberg.de/internet/newsfeed/stadtportal_ausstellungen.xml, taken from the site mentioned above) and add it in the folder regional. The documentation for this command tells us we will need to add the two arguments as URL parameters. Since the URL contains suspicious characters like :, . and / (that are likely to break things), I will use the curl flag --data-urlencode:

curl -X POST \
    --cookie session.newsblur \
    --data-urlencode "url=https://www.nuernberg.de/internet/newsfeed/stadtportal_ausstellungen.xml" \
    --data-urlencode "folder=regional" \
    "https://www.newsblur.com/reader/add_url"
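To see what --data-urlencode actually sends, here is a small offline sketch that percent-encodes the feed URL the same way (using python3's urllib here is my own choice, not something the Newsblur API requires):

```shell
url="https://www.nuernberg.de/internet/newsfeed/stadtportal_ausstellungen.xml"

# Percent-encode the value the way --data-urlencode does:
# ':' becomes %3A, '/' becomes %2F; '.' and '_' stay as-is.
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote_plus(sys.argv[1]))' "$url")
echo "url=$encoded"
# → url=https%3A%2F%2Fwww.nuernberg.de%2Finternet%2Fnewsfeed%2Fstadtportal_ausstellungen.xml
```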

The output again is some JSON, giving me information about the newly subscribed feed:

    {
    ...
      "subs" : 1,
      "feed_link" : "https://www.nuernberg.de/internet/newsfeed/Stadtportal - Ausstellungen",
      "id" : 7686975,
      "min_to_decay" : 240,
      "updated_seconds_ago" : 15878,
      "stories_last_month" : 6,
      "favicon_text_color" : "white",
      "fetched_once" : true,
      "search_indexed" : true,
      "last_story_seconds_ago" : 541185,
    ...
    }

A look at the Newsblur website shows that we really did subscribe to the feed and that it was placed in the correct folder. So we are nearly there! Let’s get a list of all the feeds that are linked on the city administration’s site:

lynx -dump -listonly "https://www.nuernberg.de/internet/stadtportal/feed.html" \
    | grep xml | awk '{print $2}' > feeds_to_subscribe_to.txt
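The grep/awk part of that pipeline can be tried out offline. A sketch with a made-up miniature of what lynx -dump -listonly produces (the second and third URLs below are illustrative, not necessarily real feeds):

```shell
# lynx -dump -listonly prints a "References" section with one
# numbered link per line.
cat > dump-demo.txt <<'EOF'
References

   1. https://www.nuernberg.de/internet/stadtportal/feed.html
   2. https://www.nuernberg.de/internet/newsfeed/stadtportal_ausstellungen.xml
   3. https://www.nuernberg.de/internet/newsfeed/stadtportal_konzerte.xml
EOF

# Field 1 is lynx's numbering, field 2 the URL; keep only the .xml links
grep xml dump-demo.txt | awk '{print $2}' > feeds-demo.txt
cat feeds-demo.txt
```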

We get a file with one feed address per line (36 in total). Now all that is left is to loop over them and subscribe to them one by one.

while read -r feed
    do
        curl -X POST \
            --cookie session.newsblur \
            --data-urlencode "url=$feed" \
            --data-urlencode "folder=regional" \
            "https://www.newsblur.com/reader/add_url"
        sleep 1 # don't spam the api
    done < feeds_to_subscribe_to.txt
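If one of the 36 calls fails, a plain loop won't tell you. Here is a slightly more defensive sketch: the curl call is swapped for a stub function so the example runs offline, and all file names are made up:

```shell
# Demo input file with fake feed URLs
printf '%s\n' \
    "https://example.com/a.xml" \
    "https://example.com/b.xml" > feeds-stub.txt

subscribe() {  # stand-in for the real curl call
    echo "would subscribe: $1"
}

: > failed-stub.txt
while read -r feed
do
    # Collect failures instead of silently ignoring them
    subscribe "$feed" || echo "$feed" >> failed-stub.txt
    sleep 1 # don't spam the api
done < feeds-stub.txt

fails=$(wc -l < failed-stub.txt | tr -d ' ')
echo "failures: $fails"
# → failures: 0
```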

And there we go: we subscribed to each of the feeds listed on the website.

I haven’t had much opportunity to work with APIs yet, but it was fun to play around with this one - there is nothing to be afraid of.