Petter Reinholdtsen

New and improved sqlcipher in Debian for accessing Signal database
12th November 2023

For a while now I have wanted direct access to the Signal database of messages and channels used by the Desktop edition of Signal. These days I prefer the enforced end-to-end encryption of Signal for my communication with friends and family, both to increase the level of safety and privacy and to raise the cost of the mass surveillance that government and non-government entities practice. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately it did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week and set out to update the package. In the process I orphaned it, to make it more obvious to the next person looking at it that the package needs proper maintenance.

The version in Debian was around five years old, and quite a lot of changes had taken place upstream since then. After spending a few days importing the new upstream versions into the Debian maintenance git repository, and realising that upstream did not care much for SONAME versioning, as library symbols were both added and removed across minor version number changes, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended (sketched below). Having dug deep into the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition.
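The autopkgtest is a simple smoke test. A minimal sketch of the idea, with hypothetical file contents that need not match what ended up in the package, is to create an encrypted in-memory database from the command line tool and read a value back:

# debian/tests/control (hypothetical)
Tests: cli
Depends: sqlcipher

# debian/tests/cli (hypothetical)
#!/bin/sh
set -e
sqlcipher :memory: "PRAGMA key = 'test';
  CREATE TABLE t(x);
  INSERT INTO t VALUES(42);
  SELECT x FROM t;" | grep -q 42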

Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:

/usr/bin/jq -r '."key"' ~/.config/Signal/config.json

Assume the result from that command is 'secretkey', a hexadecimal string representing the key used to encrypt the database. Next, one can connect to the database and inject the encryption key to get SQL access to the information in it. Here is an example dumping the database structure:

% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations(
      id STRING PRIMARY KEY ASC,
      json TEXT,

      active_at INTEGER,
      type STRING,
      members TEXT,
      name TEXT,
      profileName TEXT
    , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE items(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE sessions(
      id TEXT PRIMARY KEY,
      conversationId TEXT,
      json TEXT
    , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads(
    id STRING primary key,
    timestamp INTEGER,
    pending INTEGER,
    json TEXT
  );
CREATE TABLE sticker_packs(
    id TEXT PRIMARY KEY,
    key TEXT NOT NULL,

    author STRING,
    coverStickerId INTEGER,
    createdAt INTEGER,
    downloadAttempts INTEGER,
    installedAt INTEGER,
    lastUsed INTEGER,
    status STRING,
    stickerCount INTEGER,
    title STRING
  , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync
      INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers(
    id INTEGER NOT NULL,
    packId TEXT NOT NULL,

    emoji STRING,
    height INTEGER,
    isCoverOnly INTEGER,
    lastUsed INTEGER,
    path STRING,
    width INTEGER,

    PRIMARY KEY (id, packId),
    CONSTRAINT stickers_fk
      FOREIGN KEY (packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE sticker_references(
    messageId STRING,
    packId TEXT,
    CONSTRAINT sticker_references_fk
      FOREIGN KEY(packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE emojis(
    shortName TEXT PRIMARY KEY,
    lastUsage INTEGER
  );
CREATE TABLE messages(
        rowid INTEGER PRIMARY KEY ASC,
        id STRING UNIQUE,
        json TEXT,
        readStatus INTEGER,
        expires_at INTEGER,
        sent_at INTEGER,
        schemaVersion INTEGER,
        conversationId STRING,
        received_at INTEGER,
        source STRING,
        hasAttachments INTEGER,
        hasFileAttachments INTEGER,
        hasVisualMediaAttachments INTEGER,
        expireTimer INTEGER,
        expirationStartTimestamp INTEGER,
        type STRING,
        body TEXT,
        messageTimer INTEGER,
        messageTimerStart INTEGER,
        messageTimerExpiresAt INTEGER,
        isErased INTEGER,
        isViewOnce INTEGER,
        sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER
        GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER
        GENERATED ALWAYS AS (
          json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1
        ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT
        GENERATED ALWAYS
        AS (ifnull(
          expirationStartTimestamp + (expireTimer * 1000),
          9007199254740991
        )), shouldAffectActivity INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), shouldAffectPreview INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), isUserInitiatedMessage INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'group-v2-change',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER
        GENERATED ALWAYS AS (
          type IS 'group-v2-change' AND
          json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND
          json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND
          json_extract(json, '$.groupV2Change.from') IS NOT NULL AND
          json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci')
        ), isGroupLeaveEventFromOther INTEGER
        GENERATED ALWAYS AS (
          isGroupLeaveEvent IS 1
          AND
          isChangeCreatedByUs IS 0
        ), callId TEXT
        GENERATED ALWAYS AS (
          json_extract(json, '$.callId')
        ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs(
        id TEXT PRIMARY KEY,
        queueType TEXT STRING NOT NULL,
        timestamp INTEGER NOT NULL,
        data STRING TEXT
      );
CREATE TABLE reactions(
        conversationId STRING,
        emoji STRING,
        fromId STRING,
        messageReceivedAt INTEGER,
        targetAuthorAci STRING,
        targetTimestamp INTEGER,
        unread INTEGER
      , messageId STRING);
CREATE TABLE senderKeys(
        id TEXT PRIMARY KEY NOT NULL,
        senderId TEXT NOT NULL,
        distributionId TEXT NOT NULL,
        data BLOB NOT NULL,
        lastUpdatedDate NUMBER NOT NULL
      );
CREATE TABLE unprocessed(
        id STRING PRIMARY KEY ASC,
        timestamp INTEGER,
        version INTEGER,
        attempts INTEGER,
        envelope TEXT,
        decrypted TEXT,
        source TEXT,
        serverTimestamp INTEGER,
        sourceServiceId STRING
      , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads(
        id INTEGER PRIMARY KEY ASC,

        timestamp INTEGER NOT NULL,
        contentHint INTEGER NOT NULL,
        proto BLOB NOT NULL
      , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients(
        payloadId INTEGER NOT NULL,

        recipientServiceId STRING NOT NULL,
        deviceId INTEGER NOT NULL,

        PRIMARY KEY (payloadId, recipientServiceId, deviceId),

        CONSTRAINT sendLogRecipientsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE sendLogMessageIds(
        payloadId INTEGER NOT NULL,

        messageId STRING NOT NULL,

        PRIMARY KEY (payloadId, messageId),

        CONSTRAINT sendLogMessageIdsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE preKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges(
        id TEXT PRIMARY KEY,
        category TEXT NOT NULL,
        name TEXT NOT NULL,
        descriptionTemplate TEXT NOT NULL
      );
CREATE TABLE badgeImageFiles(
        badgeId TEXT REFERENCES badges(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        'order' INTEGER NOT NULL,
        url TEXT NOT NULL,
        localPath TEXT,
        theme TEXT NOT NULL
      );
CREATE TABLE storyReads (
        authorId STRING NOT NULL,
        conversationId STRING NOT NULL,
        storyId STRING NOT NULL,
        storyReadDate NUMBER NOT NULL,

        PRIMARY KEY (authorId, storyId)
      );
CREATE TABLE storyDistributions(
        id STRING PRIMARY KEY NOT NULL,
        name TEXT,

        senderKeyInfoJson STRING
      , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers(
        listId STRING NOT NULL REFERENCES storyDistributions(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        serviceId STRING NOT NULL,

        PRIMARY KEY (listId, serviceId)
      );
CREATE TABLE uninstalled_sticker_packs (
        id STRING NOT NULL PRIMARY KEY,
        uninstalledAt NUMBER NOT NULL,
        storageID STRING,
        storageVersion NUMBER,
        storageUnknownFields BLOB,
        storageNeedsSync INTEGER NOT NULL
      );
CREATE TABLE groupCallRingCancellations(
        ringId INTEGER PRIMARY KEY,
        createdAt INTEGER NOT NULL
      );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages(
        messageId STRING REFERENCES messages(id)
          ON DELETE CASCADE,
        sentAt INTEGER,
        readStatus INTEGER
      , conversationId STRING);
CREATE TABLE mentions (
        messageId REFERENCES messages(id) ON DELETE CASCADE,
        mentionAci STRING,
        start INTEGER,
        length INTEGER
      );
CREATE TABLE kyberPreKeys(
        id STRING PRIMARY KEY NOT NULL,
        json TEXT NOT NULL, ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
        callId TEXT PRIMARY KEY,
        peerId TEXT NOT NULL, -- conversation id (legacy) | uuid | groupId | roomId
        ringerId TEXT DEFAULT NULL, -- ringer uuid
        mode TEXT NOT NULL, -- enum "Direct" | "Group"
        type TEXT NOT NULL, -- enum "Audio" | "Video" | "Group"
        direction TEXT NOT NULL, -- enum "Incoming" | "Outgoing
        -- Direct: enum "Pending" | "Missed" | "Accepted" | "Deleted"
        -- Group: enum "GenericGroupCall" | "OutgoingRing" | "Ringing" | "Joined" | "Missed" | "Declined" | "Accepted" | "Deleted"
        status TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        UNIQUE (callId, peerId) ON CONFLICT FAIL
      );
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages
      WHEN
        new.body IS NOT NULL AND new.isViewOnce = 1
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
      END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages
      WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        DELETE FROM sendLogPayloads WHERE id IN (
          SELECT payloadId FROM sendLogMessageIds
          WHERE messageId = old.id
        );
        DELETE FROM reactions WHERE rowid IN (
          SELECT rowid FROM reactions
          WHERE messageId = old.id
        );
        DELETE FROM storyReads WHERE storyId = old.storyId;
      END;
CREATE VIRTUAL TABLE messages_fts USING fts5(
        body,
        tokenize = 'signal_tokenizer'
      );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages
      WHEN
        (new.body IS NULL OR old.body IS NOT new.body) AND
         new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages
      BEGIN
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages
      BEGIN
        DELETE FROM mentions WHERE messageId = new.id;
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
sqlite>

Finally I have the tool I need to inspect and process Signal messages without using the vendor provided client. Now on to transforming the data into a more useful format.
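As a first step in that direction, here is a sketch of a query listing the latest messages together with the conversation name, using tables and columns from the schema above. I am assuming sent_at is a millisecond timestamp, hence the division by 1000 before handing it to datetime():

% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> SELECT datetime(m.sent_at / 1000, 'unixepoch') AS sent,
   ...>        coalesce(c.name, c.profileFullName) AS who,
   ...>        m.body
   ...>   FROM messages m JOIN conversations c ON m.conversationId = c.id
   ...>  WHERE m.body IS NOT NULL
   ...>  ORDER BY m.sent_at DESC LIMIT 10;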

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, sikkerhet, surveillance.
New chrpath release 0.17
10th November 2023

The chrpath package provides a simple command line tool to remove or modify the rpath or runpath of compiled ELF programs. It is almost 10 years since I last updated the code base, but I stumbled over the tool today and decided it was time to move the code base from Subversion to git and find a new home for it, as the previous one (Debian Alioth) has been shut down. I decided to go with Codeberg this time, as it is my git service of choice these days, did a quick and dirty migration to git, and updated the code with a few patches I found in the Debian bug tracker. These are the release notes:

New in 0.17 released 2023-11-10:

The latest edition is tagged and available from https://codeberg.org/pere/chrpath.
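If you have never used the tool, here is a quick usage sketch. The binary name ./myprog is made up, while the options are the ones chrpath has carried for years:

% chrpath -l ./myprog           # list the current rpath/runpath
% chrpath -r /opt/lib ./myprog  # replace the rpath with /opt/lib
% chrpath -d ./myprog           # delete the rpath entirely

Note that chrpath patches the ELF file in place, so a replacement rpath can not be longer than the string already present in the binary.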

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: chrpath, debian, english.
Test framework for DocBook processors / formatters
5th November 2023

All the books I have published so far have used DocBook somewhere in the process. For the first book, the source format was DocBook, while for every later book it has been an intermediate format, used as the stepping stone to present the same manuscript in several formats: on paper, as an ebook in ePub format, as an HTML page, and as a PDF file either for paper production or for Internet consumption. This is made possible by a wide variety of free software tools with DocBook support in Debian. The source formats of later books have been docx via rst, Markdown, Filemaker and Asciidoc, and for all of these I was able to generate a suitable DocBook file for further processing using pandoc, a2x and asciidoctor, as well as rendering using xmlto, dbtoepub, dblatex, docbook-xsl and fop.

Most of the books I have published are translated books, with English as the source language. The use of po4a to handle translations using the gettext PO format has been a blessing, but publishing translated books has triggered the need to ensure the DocBook tools handle the relevant languages correctly. For every new language I have published in, I had to submit patches to dblatex, dbtoepub and docbook-xsl fixing incorrect language and country specific issues in the frameworks themselves. Typically this has been missing keywords like 'figure' or the sort ordering of index entries. After a while it became tiresome to only discover issues like this by accident, so I decided to write a DocBook "test framework" exercising various features of DocBook and allowing me to see all features exercised for a given language. It consists of a set of DocBook files: a version 4 book, a version 5 book, a v4 book set, a v4 selection of problematic tables, one v4 file testing sidefloat and finally one v4 file testing a book of articles. The DocBook files are accompanied by a set of build rules for building PDF using dblatex and docbook-xsl/fop, HTML using xmlto or docbook-xsl, and ePub using dbtoepub. The result is a set of files visualizing footnotes, indexes, the table of contents, figures, formulas and other DocBook features, allowing for a quick review of the completeness of the given locale settings. To build with a different language setting, all one needs to do is edit the lang= value in the .xml file to pick a different ISO 639 code value and run 'make', as sketched below.
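For example, switching a test book to Northern Sami and rebuilding might look like this. The file name is hypothetical, so check the build rules in the repository for the real ones:

% sed -i 's/lang="nb"/lang="se"/' book.xml   # pick a different ISO 639 code
% make                                       # rebuild the PDF, HTML and ePub output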

The test framework source code is available from Codeberg, and a generated set of presentations of the various examples is available as Codeberg static web pages at https://pere.codeberg.page/docbook-example/. Using this test framework I have been able to discover and report several bugs and missing features in various tools, and got a lot of them fixed. For example I got Northern Sami keywords added to both docbook-xsl and dblatex, fixed several typos in Norwegian Bokmål and Norwegian Nynorsk, got support for non-ASCII title IDs added to pandoc, got Norwegian index sorting fixed in xindy, and got initial Norwegian Bokmål support added to dblatex. Some issues still remain, though. Default index sorting rules are still broken in several tools, so the Norwegian letters æ, ø and å are more often than not sorted incorrectly in the book index.

The test framework recently received some more polish, as part of publishing my latest book. This book contained a lot of fairly complex tables, which exposed bugs in some of the tools. This made me add a new test file with various tables, as well as spend some time brushing up the build rules. My goal is for the test framework to exercise all DocBook features, to make it easier to see which features work with different processors, and hopefully to get them all to support the full set of DocBook features. Feel free to send patches to extend the test set, and to test it with your favorite DocBook processor. The source repository and the generated examples mentioned above are the best places to learn more.

If you want to learn more about DocBook and translations, I recommend having a look at the DocBook web site, the DoCookBook site, my earlier blog post on how the Skolelinux project processes and translates documentation, and a talk I gave earlier this year on how to translate and publish books using free software (Norwegian only).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, docbook, english.
«Virkninger av angrefristloven», the thesis that got a law changed
29th October 2023

In 1979, Ole-Erik Yrvin submitted a Cand. Scient. thesis at the Institute of Sociology at the University of Oslo, commissioned by Forbruker- og administrasjonsdepartementet (the Ministry of Consumer Affairs and Administration). The thesis evaluated the 1972 Angrefristloven (the Norwegian cooling-off period act), and what he discovered led to the law being changed four years later.

I have known Ole-Erik for a while, and found it sad that his thesis is no longer available, neither from the commissioning ministry nor from the university. His attempts to get it scanned and published on the Internet proved futile, so a while back I offered to publish it and make it available under free-use terms on the Internet. This is now done, and the thesis is available among other places via my list of published books, as a web page, as a digital book in ePub format, and on paper from lulu.com. I expect it will also show up in online bookstores within a month or two.

All tables and figures have been recreated for better readability, some typos have been corrected, and many references have been given more details such as ISBN numbers and DOI references. While I do not expect this to become a bestseller, I hope this new edition will benefit readers in the future.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b. Note, paying with Bitcoin is not anonymous. :)

Tags: docbook, norsk.
«a subordinate official becomes disqualified because a superior is disqualified».
7th September 2023

The members of the Norwegian government have demonstrated over the last few months that assessing conflicts of interest is not their strong suit, and this goes for the representatives of both Arbeiderpartiet and Senterpartiet. It is fortunately simpler in the private sector, as the disqualification rules only apply to those working for the public, not for themselves. The latest case is foreign minister Huitfeldt. Yesterday the news broke that Riksadvokaten (the Director of Public Prosecutions) has concluded that the deputy head of Økokrim can handle the case about Huitfeldt's conflict of interest and insider knowledge, despite the fact that his superior, the head of Økokrim, has declared himself disqualified in the case. This is a bit strange. In the guide «Habilitet i kommuner og fylkeskommuner» from the Ministry of Local Government and Regional Development, they explain the applicable rules. Granted, the guide does not apply to Økokrim, which is neither a municipality nor a county, but I do not get the impression that these rules apply only to municipalities and counties:

«2.1 Overview of the grounds for disqualification

The general rules on disqualification in the public administration are given in §§ 6 to 10 of the Public Administration Act (forvaltningsloven). The act's main rule on disqualification appears in § 6. It sets out three different grounds that can make an official or an elected representative disqualified. § 6 first paragraph letters a to e list concrete connections between the official and the case or the parties to the case that automatically lead to disqualification. The second paragraph provides a discretionary rule under which the official can also become disqualified after a concrete assessment of the question, where a long list of factors may be relevant. The third paragraph contains rules on so-called derived disqualification, that is, a subordinate official becomes disqualified because a superior is disqualified.»

The law simply says «If the superior official is disqualified, a decision in the case can not be made by a directly subordinate official in the same administrative body either.» I assume the idea is that a subordinate would risk adjusting their conclusions to whatever benefits the superior, in order to stay on good terms with them. But I am not a lawyer, and probably do not understand complicated legal assessments. To quote George Orwell's «Animal Farm» (in Norwegian «Kamerat Napoleon»): «All animals are equal, but some animals are more equal than others».

Tags: norsk.
Invidious add-on for Kodi 20
10th August 2023

I still enjoy Kodi and LibreELEC as my multimedia center at home. Sadly two of the services I really would like to use from within Kodi are not easily available. The most wanted add-on would be one making The Internet Archive available, and it has not been working for many years. The second most wanted is one using the Invidious privacy enhanced YouTube frontend. A plugin for this has been partly working, but has not been kept up to date in the Kodi add-on repository, and its upstream seems to have given it up in April this year, when the git repository was closed. A few days ago I got tired of this sad state of affairs and decided to have a go at improving the Invidious add-on. Google has already attacked the Invidious concept, so it needs all the support it can get. My small contribution here is to improve its status on Kodi.

I added support to the Invidious add-on for automatically picking a working Invidious instance, instead of requiring the user to specify the URL of a specific instance after installation. I also had a look at the set of patches floating around in the various forks on GitHub, and decided to clean up at least some of the features I liked and integrate them into my new release branch. Now the plugin can handle channel and short video items in search results; earlier it could only handle single video instances in the search response. I also brushed up the set of metadata displayed a bit, but hope I can figure out how to get more relevant metadata displayed.
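Instance picking boils down to fetching a directory of public instances and trying candidates until one answers. A rough sketch of the idea from the command line, assuming the api.invidious.io directory service and its JSON layout at the time of writing, could be:

% curl -s https://api.invidious.io/instances.json | \
    jq -r '.[][1] | select(.type == "https" and .api == true) | .uri' | head -3

The add-on implements similar logic internally to end up with a working instance.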

Because I only use Kodi 20 myself, I only test on version 20 and am only motivated to ensure version 20 is working. Because of API changes between version 19 and 20, I suspect it will fail with earlier Kodi versions.

I have already asked to have the add-on added to the official Kodi 20 repository, and am waiting to hear back from the repo maintainers.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, kodi, multimedia, video.
What did I learn from OpenSnitch this summer?
11th June 2023

With yesterday's release of Debian 12 Bookworm, I am happy to know that the interactive application firewall OpenSnitch is available for a wider audience. I have been running it for a few weeks now, and have been surprised by some of the programs connecting to the Internet. Some programs obviously call out from my machine, like the NTP network based clock adjusting system and Tor reaching other Tor clients, but others were more dubious. For example, the KDE window manager tries to look up the host name in DNS for no apparent reason, but if this lookup is blocked, the KDE desktop periodically gets stuck when I use it. Another surprise was how much Firefox calls home directly to mozilla.com, mozilla.net and googleapis.com, to mention a few, when I visit other web pages. These direct connections happen even though I told Firefox to always use a proxy; the proxy setting is ignored for this traffic. Other surprising connections come from audacity and dirmngr (I do not use Gnome). It took some trial and error to get a good default set of permissions. Without it, I would get popups asking for permissions at any time, including the most inconvenient ones, such as in the middle of a time sensitive gaming session.

I suspect some application developers should rethink when they need to use network connections or DNS lookups, and I recommend testing OpenSnitch (only an apt install opensnitch away in Debian Bookworm) to locate and report any surprising Internet connections on your desktop machine.

At the moment the upstream developer and Debian package maintainer is working on making the system more reliable in Debian, by enabling the eBPF kernel module to track processes and connections instead of depending on the content of /proc/. This should enter unstable fairly soon.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2023-06-12: I got a tip about a list of privacy issues in Free Software and the #debian-privacy IRC channel discussing these topics.

Tags: debian, english, opensnitch.
wmbusmeters, parse data from your utility meter - nice free software
19th May 2023

There is a European standard for reading utility meters like water, gas, electricity or heat distribution meters. The Meter-Bus standard (EN 13757-2, EN 13757-3 and EN 13757-4) provides a cross-vendor way to talk to and collect data from such meters. I ran into this standard when I wanted to monitor some heat distribution meters, and managed to find free software that could do the job. The meters in question broadcast encrypted messages with meter information via radio, and the hardest part was to track down the encryption keys from the vendor. With this in place I could set up an MQTT gateway to submit the meter data for graphing.

The free software systems in question, rtl-wmbus to read the messages from a software defined radio, and wmbusmeters to decrypt and decode the content of the messages, are working very well and allow me to get frequent updates from my meters. I got in touch with upstream last year to see if there was any interest in publishing the packages via Debian. I was very happy to learn that Fredrik Öhrström volunteered to maintain the packages, and I have since assisted him in getting Debian package build rules in place as well as sponsoring the packages into the Debian archive. Sadly we completed it too late for them to become part of the next stable Debian release (Bookworm). The wmbusmeters package just cleared the NEW queue. It will need some work to fix a build problem, but I expect Fredrik will find a solution soon.
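To give an idea of how the pieces fit together, the receive pipeline looks roughly like the following. The meter name, type, id and key are made up, and the exact device syntax may vary between versions, so consult the wmbusmeters documentation before copying it:

% rtl_sdr -f 868.95M -s 1600000 - 2>/dev/null | rtl_wmbus | \
    wmbusmeters --format=json stdin:rtlwmbus \
    MyHeatMeter multical302 12345678 00112233445566778899AABBCCDDEEFF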

If you have an infrastructure meter supporting the Meter-Bus standard, I strongly recommend having a look at these nice packages.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, nice free software.
The 2023 LinuxCNC Norwegian developer gathering
14th May 2023

The LinuxCNC project is making headway these days. A lot of patches and issues have seen activity on the project GitHub pages recently. A few weeks ago there was a developer gathering over at the Tormach headquarters in Wisconsin, and now we are planning a new gathering in Norway. If you wonder what LinuxCNC is, let's quote Wikipedia:

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

The Norwegian developer gathering takes place the weekend of June 16th to 18th this year, and is open to everyone interested in contributing to LinuxCNC. Up to date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, Redpill-Linpro and the NUUG Foundation, we have enough sponsor funds to pay for food and shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, linuxcnc.
OpenSnitch in Debian ready for prime time
13th May 2023

A bit delayed, the interactive application firewall OpenSnitch package in Debian now has the latest fixes ready for Debian Bookworm. Because it depends on a package missing on some architectures, the autopkgtest check of the testing migration script did not understand that the tests were actually working, so the migration was delayed. A bug in the package dependencies is also fixed, so those installing the firewall package (opensnitch) now also get the GUI admin tool (python3-opensnitch-ui) installed by default. I am very grateful to Gustavo Iñiguez Goya for his work on getting the package ready for Debian Bookworm.

Armed with this package I have discovered some surprising connections from programs I believed were able to work completely offline, and it has already proven its worth, at least to me. If you too want to get more familiar with the kind of programs using Internet connections on your machine, I recommend running apt install opensnitch in Bookworm and seeing what you think.

The package is still not able to build its eBPF module within Debian. I am not sure how much work it would be to get it working, but I suspect some kernel related packages need to be extended with more header files first.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, opensnitch.
