authorterminaldweller <devi@terminaldweller.com>2024-03-05 19:25:48 +0000
committerterminaldweller <devi@terminaldweller.com>2024-03-05 19:25:48 +0000
commitce567a174835644d29f3b7f4a4626f07dfb05b0a (patch)
tree4ade54ab126046515fda0dc959f6d40bbaa71607 /mds
parentdowngraded mongo to v6 for now (diff)
added a new script to conevrt all md to asciidoc, rss feed now sends the entire asciidoc for the teaser
Diffstat (limited to 'mds')
-rw-r--r--mds/DNS.txt243
-rw-r--r--mds/NTP.txt184
-rw-r--r--mds/cstruct2luatable.txt485
-rw-r--r--mds/howtogetyourSMSonIRC.txt228
-rw-r--r--mds/lazymakefiles.txt690
-rw-r--r--mds/oneclientforeverything.txt253
6 files changed, 2083 insertions, 0 deletions
diff --git a/mds/DNS.txt b/mds/DNS.txt
new file mode 100644
index 0000000..d2bc173
--- /dev/null
+++ b/mds/DNS.txt
@@ -0,0 +1,243 @@
+== What to do with your DNS when ODoH’s Trust-Me-Bruh Model doesn’t work for you
+
+DNS. Domain Name System.
+
+We all use it. We all need it. But most people are still using it like
+it’s the early 2000s. What do I mean by that? Ye good ole UDP on port 53.
+
+And your ISP will tell ya you don’t need to worry about your privacy
+because they swear on boy scout honor that they don’t log your DNS
+queries. Right ….
+
+It’s 2024. We have come a long way. We have DoH, DoT, ODoH, DNSCrypt and
+more.
+
+We’re going to talk about all of these for a little bit and then finally
+I’m going to share what I am doing right now.
+
+=== Problem Statement
+
+Plain jane DNS, i.e., sending your request using UDP without any sort of
+encryption, has been the norm almost forever. Even right now that is
+what most people are doing. That might have been oh-so-cool in the 80s
+but it doesn’t fly anymore. So we ended up with DoH and DoT.
+DNS-over-HTTPS and DNS-over-TLS. They are both self-explanatory. Instead
+of doing unencrypted requests over UDP, we do a TCP request using HTTPS
+or TLS. So far so good. DoH and DoT are definitely improvements over
+https://www.rfc-editor.org/rfc/rfc1035[RFC 1035] but let’s take a step
+back and see what we are trying to defend against. Without a structure,
+we are not doing much more than just magic granted to us by the flying
+spaghetti monster.
+
+Let’s review our threat model. What are we trying to achieve here? What
+are the threats and who are the threat actors? Who are we safeguarding
+our DNS queries against? Men-in-the-middle? Our internet provider? The
+authoritative DNS server that we use?
+
+*_Statement_*: We want to have a *_private_* and *_anonymous_* DNS
+solution. That means:
+
+*_Requirement 001_*:
+
+* The DNS queries shall only be viewed by the authoritative DNS
+server (we can tighten this requirement later by running our own
+authoritative DNS server, but for now we are going to stick with our
+current requirement).
+
+This naturally means that your internet provider and other
+men-in-the-middle are not allowed to snoop on what we are querying.
+
+*_Requirement 002_*:
+
+* The DNS queries shall be anonymous. This means the authoritative DNS
+server that is getting our DNS queries shall not be able to identify the
+source of the query.
+
+There is more than one way to ``identify'' the source of the query. We
+only mean the source as in the IP address that made the DNS query.
+
+This second requirement is what ODoH is trying to solve. ODoH tries to
+separate the identity of the source of the DNS query from the query
+itself. ODoH stands for Oblivious DoH. It adds an ``oblivious'' proxy
+between the source of the DNS query and the server. This way the proxy
+can, for example, send the queries in bulk to try to mask who sent what
+when. I’m summarizing here, but what ODoH is trying to do can be
+summarized by this:
+
+* ODoH tries to separate the identity of the source of the query from
+the query itself by adding a proxy in the middle
+
+Below you can see the general architecture:
+
+....
+ --- [ Request encrypted with Target public key ] -->
+ +---------+ +-----------+ +-----------+
+ | Client +-------------> Oblivious +-------------> Oblivious |
+ | <-------------+ Proxy <-------------+ Target |
+ +---------+ +-----------+ +-----------+
+ <-- [ Response encrypted with symmetric key ] ---
+....
+
+https://datatracker.ietf.org/doc/rfc9230/[ripped straight from RFC 9230]
+
+The main problem with this sort of solution is that there is always an
+element of ``trust-me-bruh'' to the whole situation.
+
+* How can we trust that the proxy provider and the server are not
+colluding?
+
+We could run our own oblivious proxy but then if it’s just you and your
+friends using the proxy, then your proxy is not obfuscating much, is it
+now? And then there is the ``oblivious'' aspect of the solution. How can
+we enforce that? How can you verify that?
+
+....
+Trust Me Bruh. We don't Log anything ...
+....
+
+We have cryptography. We have zero-knowledge proofs. I think we can do
+better than just blind trust.
+
+Objectively speaking, and I’m not accusing anyone of anything so it’s
+just a hypothetical, but if someone gave me some money and asked me to
+come up with a system which lets them practically monopolize access to
+DNS queries, I would propose ODoH.
+
+It has enough mumbo-jumbo tech jargon (end-to-end encrypted, …) to
+throw off your average layman and lull them into a false sense of
+security and privacy, but it doesn’t prevent the proxy and server
+providers from colluding. After all the technical jargon, you end up
+with ``it’s safe'' and ``it’s private'' because ``you can trust us''.
+
+Now we can see that DoH, DoT and ODoH are all better than baseline DNS
+queries over UDP without encryption but they can’t satisfy both of our
+requirements.
+
+=== Solution
+
+Now let’s talk about the solution I use at the time of writing this
+blog post.
+
+DoH or DoT is good enough to satisfy `Requirement001` but they need
+something a little extra to be able to satisfy `Requirement002`.
+
+For that, we use an anonymizing network like Tor. DoT and DoH both work
+over TCP, so we can use any SOCKS5 proxy here that ends up being a Tor
+proxy. What I mean is, you can use the Tor instance running on your
+host, or you can use `ssh -L` to use the Tor instance running on a VPS.
+That way, your internet provider can’t know you’re using Tor at all.
+With your DNS queries going over Tor, we can satisfy `Requirement002`.
+Tor is not the only solution here, but I use Tor. There is more than
+one anonymizing network out there, and there are protocols that do this
+as well.
+
+Right now we have an outline in our head:
+
+* We need to only use TCP for DNS and send everything over a Tor SOCKS5
+proxy.
+* We will be using DoT or DoH. This ensures we are using TCP for DNS,
+which is what most SOCKS5 implementations support (even though they
+should support UDP because it’s SOCKS5 and not SOCKS4, but that’s
+another can of worms).
+
+There is more than one way to do this but I have decided to use
+https://github.com/DNSCrypt/dnscrypt-proxy[dnscrypt-proxy]. We will not
+be using dnscrypt for the dnscrypt protocol though you could elect to
+use that as the underlying DNS protocol. `dnscrypt-proxy` lets us use
+a SOCKS5 proxy through which the DNS queries will be sent. We will use a
+Tor SOCKS5 proxy here. You can choose which protocols should be enabled
+and which ones should be disabled. There are two points:
+
+* one, enable the TCP-only option, since we don’t want to use plain
+jane UDP queries.
+* two, I have asked `dnscrypt-proxy` to only use DNS servers that
+support DNSSEC.
+
+I recommend going through all the available options in the
+`dnscrypt-proxy.toml` file. It is one of those config files with
+comments so it’s pretty sweet. There are quite a few useful options in
+there that you might care about depending on your needs.
+
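The knobs that matter for this setup live in `dnscrypt-proxy.toml`. A minimal sketch of the relevant options (option names as found in the stock example config; the proxy address assumes Tor's SOCKS port on localhost):

```toml
# Send all upstream traffic through the local Tor SOCKS5 proxy
# (assumes Tor is listening on 127.0.0.1:9050).
proxy = 'socks5://127.0.0.1:9050'

# Never use plain UDP; TCP only, which also plays nice with SOCKS5.
force_tcp = true

# Only use resolvers that support DNSSEC validation.
require_dnssec = true
```
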
+==== Implementation
+
+Right now I run `dnscrypt-proxy` on a small Alpine Linux VM. I made it
+fancier by running the VM on a tmpfs storage pool, so mine is basically
+running entirely in RAM. I used to have `dnscrypt-proxy` running on a
+Raspberry Pi and had my OpenWrt router forward DNS queries to that
+Raspberry Pi. There is obviously no best solution here. Just pick one
+that works for you. Here is the Vagrantfile I use for the DNS VM:
+
+[source,ruby]
+----
+ENV['VAGRANT_DEFAULT_PROVIDER'] = 'libvirt'
+Vagrant.require_version '>= 2.2.6'
+Vagrant.configure('2') do |config|
+ config.vm.box = 'generic/alpine319'
+ config.vm.box_version = '4.3.12'
+ config.vm.box_check_update = false
+ config.vm.hostname = 'virt-dns'
+
+ # ssh
+ config.ssh.insert_key = true
+ config.ssh.keep_alive = true
+ config.ssh.keys_only = true
+
+ # timeouts
+ config.vm.boot_timeout = 300
+ config.vm.graceful_halt_timeout = 60
+ config.ssh.connect_timeout = 30
+
+ # shares
+ config.vm.synced_folder '.', '/vagrant', type: 'nfs', nfs_version: 4, nfs_udp: false
+
+ config.vm.network :private_network, :ip => '192.168.121.93' , :libvirt__domain_name => 'devidns.local'
+
+ config.vm.provider 'libvirt' do |libvirt|
+ libvirt.storage_pool_name = 'ramdisk'
+ libvirt.default_prefix = 'dns-'
+ libvirt.driver = 'kvm'
+ libvirt.memory = '256'
+ libvirt.cpus = 2
+ libvirt.sound_type = nil
+ libvirt.qemuargs value: '-nographic'
+ libvirt.qemuargs value: '-nodefaults'
+ libvirt.qemuargs value: '-no-user-config'
+ libvirt.qemuargs value: '-serial'
+ libvirt.qemuargs value: 'pty'
+ libvirt.random model: 'random'
+ end
+
+ config.vm.provision 'reqs', type: 'shell', name: 'reqs-install', inline: <<-SHELL
+ sudo apk update &&\
+ sudo apk upgrade &&\
+ sudo apk add tor dnscrypt-proxy privoxy tmux
+ SHELL
+
+ config.vm.provision 'reqs-priv', type: 'shell', name: 'reqs-priv-install', privileged: true, inline: <<-SHELL
+ cp /vagrant/torrc /etc/tor/torrc
+ cp /vagrant/dnscrypt-proxy.toml /etc/dnscrypt-proxy/dnscrypt-proxy.toml
+ #cp /vagrant/config /etc/privoxy/config
+ rc-service tor start
+ sleep 1
+ #rc-service privoxy start
+ #sleep 1
+ rc-service dnscrypt-proxy start
+ SHELL
+end
+----
+
+It’s pretty straightforward. We use an Alpine Linux VM as the base,
+make a new interface on the VM with a static IP, and have
+`dnscrypt-proxy` receive DNS queries through that interface and IP
+only. I don’t change the port number (53) because certain applications
+(you know who you are) refuse to accept a port in a DNS server’s
+address. You could also make it spicier by using `privoxy`. Maybe I’ll
+write a post about that later.
+
+timestamp:1708814484
+
+version:1.0.0
+
+https://blog.terminaldweller.com/rss/feed
+
+https://raw.githubusercontent.com/terminaldweller/blog/main/mds/DNS.md
diff --git a/mds/NTP.txt b/mds/NTP.txt
new file mode 100644
index 0000000..8060191
--- /dev/null
+++ b/mds/NTP.txt
@@ -0,0 +1,184 @@
+== After NTP Comes NTS
+
+Well, for this one I will be talking a bit about NTP and NTS. Unlike
+the DNS post, there isn’t much going on here.
+
+NTP is plain-text; NTS uses TLS, so if our requests are tampered with,
+we can know. There is the ``oooh, you can’t see what I’m sending now''
+aspect, but in this case it’s NTP, so the content being secret is not
+necessarily more important than making sure the content has not been
+modified (a guarantee of integrity).
+
+So far so good. But before we go any further, let’s talk about what we
+are trying to achieve here; in other words, what requirements we are
+trying to satisfy:
+
+* REQ-001: The NTP(NTS) requests shall be anonymous.
+* REQ-002: It shall be evident when an NTP(NTS) request has been
+tampered with.
+* REQ-003: It should not be known which time servers are being used
+upstream by the client.
+
+Now let’s talk about the problem. The protocol is fine. We are sending
+TCP with TLS here. That’s brilliant. We get all this:
+
+....
+* Identity: Through the use of a X.509 public key infrastructure, implementations can cryptographically establish the identity of the parties they are communicating with.
+* Authentication: Implementations can cryptographically verify that any time synchronization packets are authentic, i.e., that they were produced by an identified party and have not been modified in transit.
+* Confidentiality: Although basic time synchronization data is considered nonconfidential and sent in the clear, NTS includes support for encrypting NTP extension fields.
+* Replay prevention: Client implementations can detect when a received time synchronization packet is a replay of a previous packet.
+* Request-response consistency: Client implementations can verify that a time synchronization packet received from a server was sent in response to a particular request from the client.
+* Unlinkability: For mobile clients, NTS will not leak any information additional to NTP which would permit a passive adversary to determine that two packets sent over different networks came from the same client.
+* Non-amplification: Implementations (especially server implementations) can avoid acting as distributed denial-of-service (DDoS) amplifiers by never responding to a request with a packet larger than the request packet.
+* Scalability: Server implementations can serve large numbers of clients without having to retain any client-specific state.
+* Performance: NTS must not significantly degrade the quality of the time transfer. The encryption and authentication used when actually transferring time should be lightweight.
+....
+
+excerpt from https://www.rfc-editor.org/rfc/rfc8915[RFC 8915]
+
+If we find a client that lets us use a SOCKS5 proxy, then we can send
+our NTS requests over Tor and then call it a day. REQ-002 and REQ-003
+are being satisfied by using TLS. The missing piece is REQ-001,
+anonymizing the requests.
+
+This is not something for the protocol to handle, so we have to look
+for a client that supports a SOCKS5 proxy.
+
+Unfortunately https://gitlab.com/chrony/chrony[chrony] and
+https://github.com/pendulum-project/ntpd-rs[ntpd-rs] do not support
+SOCKS5 proxies.
+
+* for ntpd-rs look
+https://github.com/pendulum-project/ntpd-rs/discussions/1365[here]
+
+Which means our setup is not complete.
+
+=== Implementation
+
+We will be using ntpd-rs as the client. We will also set up an NTS
+server using https://gitlab.com/NTPsec/ntpsec[ntpsec]. Below are, in
+order, the ntpd-rs configuration, the ntpsec `ntp.conf`, and the
+docker-compose file I use to run the ntpsec server.
+
+[source,toml]
+----
+[observability]
+log-level = "info"
+observation-path = "/var/run/ntpd-rs/observe"
+
+[[source]]
+mode = "nts"
+address = "virginia.time.system76.com"
+
+[[source]]
+mode = "nts"
+address = "mmo1.nts.netnod.se"
+
+[[source]]
+mode = "nts"
+address = "ntppool1.time.nl"
+
+[[source]]
+mode = "nts"
+address = "ntp1.glypnod.com"
+
+[[source]]
+mode = "nts"
+address = "ntp3.fau.de"
+
+[synchronization]
+single-step-panic-threshold = 1800
+startup-step-panic-threshold = { forward="inf", backward = 1800 }
+minimum-agreeing-sources = 3
+accumulated-step-panic-threshold = 1800
+
+[[server]]
+listen = "127.0.0.1:123"
+
+[[server]]
+listen = "172.17.0.1:123"
+
+[[server]]
+listen = "192.168.121.1:123"
+
+[[server]]
+listen = "10.167.131.1:123"
+
+[[server]]
+listen = "[::1]:123"
+----
+
+[source,config]
+----
+nts enable
+nts key /etc/letsencrypt/live/nts.dehein.org/privkey.pem
+nts cert /etc/letsencrypt/live/nts.dehein.org/fullchain.pem mintls TLS1.3
+nts cookie /var/lib/ntp/nts-keys
+nts-listen-on 4460
+server 0.0.0.0 prefer
+
+server ntpmon.dcs1.biz nts # Singapore
+server ntp1.glypnod.com nts # San Francisco
+server ntp2.glypnod.com nts # London
+
+tos maxclock 5
+
+restrict default kod limited nomodify noquery
+restrict -6 default kod limited nomodify noquery
+
+driftfile /var/lib/ntp/ntp.drift
+
+statsdir /var/log/ntpstats/
+----
+
+[source,yaml]
+----
+version: "3.9"
+services:
+  ntpsec:
+ image: ntpsec
+ build:
+ context: .
+ deploy:
+ resources:
+ limits:
+ memory: 128M
+ logging:
+ driver: "json-file"
+ options:
+ max-size: "50m"
+ networks:
+ - ntsnet
+ ports:
+ - "4460:4460/tcp"
+ restart: unless-stopped
+ entrypoint: ["ntpd"]
+ command: ["-n", "-I", "0.0.0.0", "-d", "5"]
+ volumes:
+ - ./ntp.conf:/etc/ntp.conf:ro
+ - /etc/letsencrypt/live/nts.dehein.org/fullchain.pem:/etc/letsencrypt/live/nts.dehein.org/fullchain.pem:ro
+ - /etc/letsencrypt/live/nts.dehein.org/privkey.pem:/etc/letsencrypt/live/nts.dehein.org/privkey.pem:ro
+ - vault:/var/lib/ntp
+ cap_drop:
+ - ALL
+ cap_add:
+ - SYS_NICE
+ - SYS_RESOURCE
+ - SYS_TIME
+networks:
+ ntsnet:
+volumes:
+ vault:
+----
+
+=== Links
+
+* https://www.rfc-editor.org/rfc/rfc8915[RFC 8915]
+* https://github.com/jauderho/nts-servers[Here] you can find a list of
+publicly available servers that support NTS
+
+timestamp:1709418680
+
+version:1.0.0
+
+https://blog.terminaldweller.com/rss/feed
+
+https://raw.githubusercontent.com/terminaldweller/blog/main/mds/NTP.md
diff --git a/mds/cstruct2luatable.txt b/mds/cstruct2luatable.txt
new file mode 100644
index 0000000..e95cc6b
--- /dev/null
+++ b/mds/cstruct2luatable.txt
@@ -0,0 +1,485 @@
+== C Struct to Lua table
+
+=== Overview
+
+For this tutorial we’ll change a C struct into a Lua table. The
+structure we’ll be using won’t be the simplest structure you’ll come
+across in the wild so hopefully the tutorial will do a little more than
+just cover the basics. We’ll add the structures as `userdata` and not as
+`lightuserdata`. Because of that, we won’t have to manage the memory
+ourselves, instead we will let Lua’s GC handle it for us. Disclaimer:
+
+* This tutorial is not supposed to be a full dive into Lua tables,
+metatables and their implementation or behavior. The tutorial is meant
+as an entry point into implementing custom Lua tables.
+
+==== Yet Another One?
+
+There are already a couple of tutorials on this, yes, but the ones I
+managed to find were all targeting older versions of Lua, and as the
+Lua devs have clearly stated, different Lua versions are really
+different. The other reason I wrote this is that I needed a structure
+that had structure members itself, and I couldn’t find a tutorial for
+that. This tutorial will be targeting Lua 5.3. We’ll also be using a
+not-so-simple structure to turn into a Lua table.
+
+==== What you’ll need
+
+* A working C compiler (I’ll be using clang)
+* Make
+* You can get the repo
+https://github.com/bloodstalker/blogstuff/tree/master/src/cstruct2luatbale[here].
+
+=== C Structs
+
+First let’s take a look at the C structures we’ll be using. The primary
+structure is called `a_t` which has, inside it, two more structures
+`b_t` and `c_t`:
+
+[source,c]
+----
+typedef struct {
+ uint64_t a_int;
+ double a_float;
+ char* a_string;
+ b_t* a_p;
+ c_t** a_pp;
+} a_t;
+----
+
+[source,c]
+----
+typedef struct {
+ uint32_t b_int;
+ double b_float;
+} b_t;
+----
+
+[source,c]
+----
+typedef struct {
+ char* c_string;
+ uint32_t c_int;
+} c_t;
+----
+
+The structures are purely artificial.
+
+=== First Step: Lua Types
+
+First let’s take a look at `a_t` and decide how we want to do this.
+`a_t` has five members:
+
+* `a_int` which in Lua we can turn into an `integer`.
+* `a_float` which we can turn into a `number`.
+* `a_string` which will be a Lua `string`.
+* `a_p` which is a pointer to another structure. As previously stated,
+we will turn this into a `userdata`.
+* `a_pp` which is a double pointer. We will turn this into a table of
+`userdata`.
+
+=== Second Step: Helper Functions
+
+Now let’s think about what we need to do. First we need to think about
+how we will be using our structures. For this example we will go with a
+pointer, i.e., our library code will get a pointer to the structure so
+we need to turn the table into `userdata`. Next, we want to be able to
+push and pop our new table from the Lua stack. We can also use Lua’s
+type check to make sure our library code complains when someone passes a
+bad type. We will also add functions for pushing the structure
+arguments onto the stack, a function that acts as our constructor for
+our new table (more on that later), and getter and setter methods to
+access our C structure’s fields.
+
+Let’s start: First we will write a function that checks the type and
+returns the C structure:
+
+[source,c]
+----
+static a_t* pop_a_t(lua_State* ls, int index) {
+  a_t* dummy;
+  /* luaL_checkudata verifies the value at 'index' is a userdata with
+     metatable "a_t"; on a mismatch it raises a Lua error itself. */
+  dummy = luaL_checkudata(ls, index, "a_t");
+  if (!dummy) printf("error: bad type, expected a_t\n");
+  return dummy;
+}
+----
+
+We check to see if the stack index we are getting is actually a userdata
+type and then check the type of the userdata we get to make sure we get
+the right userdata type. We check the type of the userdata by checking
+its metatable. We will get into that later. This amounts to our ``pop''
+functionality for our new type. Now let’s write a ``push''. The
+function will look like this:
+
+[source,c]
+----
+a_t* push_a_t(lua_State* ls) {
+  if (!lua_checkstack(ls, 1)) {
+    printf("o woe is me. no more room in hell...I mean stack...\n");
+    return NULL;
+  }
+  a_t* dummy = lua_newuserdata(ls, sizeof(a_t));
+  luaL_getmetatable(ls, "a_t");
+  lua_setmetatable(ls, -2);
+  lua_pushlightuserdata(ls, dummy);
+  lua_pushvalue(ls, -2);
+  lua_settable(ls, LUA_REGISTRYINDEX);
+  return dummy;
+}
+----
+
+Notice that we reserve new memory here using `lua_newuserdata` instead
+of `malloc` or what have you. This way we leave it up to Lua to handle
+the GC (in the real world, however, you might not have the luxury of
+doing so). Now let’s talk about what we are actually doing here: First
+off, we
+reserve memory for our new table using `lua_newuserdata`. Then we get
+and set the metatable that we will register later in the tutorial with
+Lua for our newly constructed userdata. Setting the metatable is our way
+of telling Lua what our userdata is, what methods it has along with some
+customizations that we will talk about later. We need to have a method
+of retrieving our full userdata when we need it. We do that by
+registering our userdata inside `LUA_REGISTRYINDEX`. We will need a key.
+for simplicity’s sake we use the pointer that `lua_newuserdata` returned
+as the key for each new full userdata. As for the value of the key, we
+will use the full userdata itself. That’s why we are using
+`lua_pushvalue`. Please note that lua doesn’t have a `push_fulluserdata`
+function and we can’t just pass the pointer to our userdata as the key
+since that would just be a lihgtuserdata and not a userdata so we just
+copy the fulluserdata onto the stack as the value for the key. Lastly we
+just set our key-value pair with `LUA_REGISTRYINDEX`.
+
+Next we will write a function that pushes the fields of the structure
+onto the stack:
+
+[source,c]
+----
+int a_t_push_args(lua_State* ls, a_t* a) {
+ if (!lua_checkstack(ls, 5)) {
+ printf("welp. lua doesn't love you today so no more stack space for you\n");
+ return 0;
+ }
+ lua_pushinteger(ls, a->a_int);
+ lua_pushnumber(ls, a->a_float);
+ lua_pushstring(ls, a->a_string);
+ push_b_t(ls);
+ lua_pushlightuserdata(ls, a->a_pp);
+ return 5;
+}
+----
+
+Notice that we are returning 5, since our next function, the `new`
+function, expects to see the 5 fields on top of the stack.
+
+Next up is our new function:
+
+[source,c]
+----
+int new_a_t(lua_State* ls) {
+ if (!lua_checkstack(ls, 6)) {
+ printf("today isnt your day, is it?no more room on top of stack\n");
+ return 0;
+ }
+ int a_int = lua_tointeger(ls, -1);
+ float a_float = lua_tonumber(ls, -2);
+ char* a_string = lua_tostring(ls, -3);
+ void* a_p = lua_touserdata(ls, -4);
+ void** a_pp = lua_touserdata(ls, -5);
+ lua_pop(ls, 5);
+ a_t* dummy = push_a_t(ls);
+ dummy->a_int = a_int;
+ dummy->a_float = a_float;
+ dummy->a_string = a_string;
+ dummy->a_p = a_p;
+ dummy->a_pp = a_pp;
+ return 1;
+}
+----
+
+We just push an `a_t` on top of the stack and then populate the fields
+with the values already on the stack. The fact that we wrote two
+separate functions for pushing the arguments and returning a new table
+instance means we can use `new_a_t` as a constructor from Lua as well.
+We’ll talk about that later.
+
+=== Third Step: Setters and Getters
+
+Now let’s move on to writing our setter and getter functions. For the
+non-userdata types it’s fairly straightforward:
+
+[source,c]
+----
+static int getter_a_float(lua_State* ls) {
+ a_t* dummy = pop_a_t(ls, -1);
+  lua_pushnumber(ls, dummy->a_float);
+ return 1;
+}
+
+static int getter_a_string(lua_State* ls) {
+ a_t* dummy = pop_a_t(ls, -1);
+ lua_pushstring(ls, dummy->a_string);
+ return 1;
+}
+----
+
+As for the setters:
+
+[source,c]
+----
+static int setter_a_int(lua_State* ls) {
+ a_t* dummy = pop_a_t(ls, 1);
+  dummy->a_int = luaL_checkinteger(ls, 2);
+ return 1;
+}
+----
+
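The float setter, registered later as `set_a_float`, follows the same pattern (a sketch, not code from the repo; `luaL_checknumber` raises a Lua error on a bad argument):

```c
static int setter_a_float(lua_State* ls) {
  a_t* dummy = pop_a_t(ls, 1);
  /* argument 2 must be a number, otherwise luaL_checknumber errors out */
  dummy->a_float = luaL_checknumber(ls, 2);
  return 1;
}
```
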
+Now for the 4th and 5th fields:
+
+[source,c]
+----
+static int getter_a_p(lua_State *ls) {
+ a_t* dummy = pop_a_t(ls, 1);
+ lua_pop(ls, -1);
+ lua_pushlightuserdata(ls, dummy->a_p);
+ lua_gettable(ls, LUA_REGISTRYINDEX);
+ return 1;
+}
+----
+
+For the sake of laziness, let’s assume `a_t->a_int` denotes the number
+of entries in `a_t->a_pp`.
+
+[source,c]
+----
+static int getter_a_pp(lua_State* ls) {
+ a_t* dummy = pop_a_t(ls, 1);
+ lua_pop(ls, -1);
+ if (!lua_checkstack(ls, 3)) {
+ printf("sacrifice a keyboard to the moon gods or something... couldnt grow stack.\n");
+ return 0;
+ }
+ lua_newtable(ls);
+ for (uint64_t i = 0; i < dummy->a_int; ++i) {
+ lua_pushinteger(ls, i + 1);
+ if (dummy->a_pp[i] != NULL) {
+ lua_pushlightuserdata(ls, dummy->a_pp[i]);
+ lua_gettable(ls, LUA_REGISTRYINDEX);
+ } else {
+ lua_pop(ls, 1);
+ continue;
+ }
+ lua_settable(ls, -3);
+ }
+ return 1;
+}
+----
+
+Since we register all our tables with `LUA_REGISTRYINDEX`, we just use
+the key, which in our case is conveniently the pointer to the
+userdata, and retrieve the value (our userdata). As you can see, for
+the setters we are assuming that the table itself is being passed as
+the first argument (the `pop_a_t` line assumes that).
+
+Our setter methods would be called like this in Lua:
+
+[source,lua]
+----
+local a = a_t()
+a:set_a_int(my_int)
+----
+
+The `:` operator in Lua is syntactic sugar. The second line from the
+above snippet is equivalent to `a.set_a_int(a, my_int)`. As you can
+see, the table itself will always be our first argument. That’s why our
+assumption above will always hold if the Lua code is well-formed.
+
+We do the same steps above for `b_t` and `c_t` getter functions.
+
+Now let’s look at our setters:
+
+[source,c]
+----
+static int setter_a_string(lua_State *ls) {
+ a_t* dummy = pop_a_t(ls, 1);
+ dummy->a_string = lua_tostring(ls, 2);
+ lua_settop(ls, 1);
+ return 0;
+}
+
+static int setter_a_p(lua_State *ls) {
+ a_t* dummy = pop_a_t(ls, 1);
+ dummy->a_p = luaL_checkudata(ls, 2, "b_t");
+ lua_pop(ls, 1);
+ lua_settop(ls, 1);
+ return 0;
+}
+----
+
+[source,c]
+----
+static int setter_a_pp(lua_State* ls) {
+  a_t* dummy = pop_a_t(ls, 1);
+  if (!lua_checkstack(ls, 3)) {
+    printf("is it a curse or something? couldnt grow stack.\n");
+    return 0;
+  }
+  int table_length = lua_rawlen(ls, 2);
+  /* reserve one slot per table entry, not just a single pointer */
+  dummy->a_pp = lua_newuserdata(ls, table_length * sizeof(void*));
+  for (int i = 1; i <= table_length; ++i) {
+    lua_rawgeti(ls, 2, i);
+    dummy->a_pp[i - 1] = luaL_checkudata(ls, -1, "c_t");
+    lua_pop(ls, 1);
+  }
+  return 0;
+}
+----
+
+We are all done with the functions we needed for our new table. Now we
+need to register the metatable we kept using:
+
+=== Fourth Step: Metatable
+
+First, if you haven’t already, take a look at the chapter on metatables
+and metamethods in PIL https://www.lua.org/pil/13.html[here].
+
+[source,c]
+----
+static const luaL_Reg a_t_methods[] = {
+ {"new", new_a_t},
+ {"set_a_int", setter_a_int},
+ {"set_a_float", setter_a_float},
+ {"set_a_string", setter_a_string},
+ {"set_a_p", setter_a_p},
+ {"set_a_pp", setter_a_pp},
+ {"a_int", getter_a_int},
+ {"a_float", getter_a_float},
+ {"a_string", getter_a_string},
+ {"a_p", getter_a_p},
+ {"a_pp", getter_a_pp},
+ {0, 0}};
+
+static const luaL_Reg a_t_meta[] = {{0, 0}};
+----
+
+We just list the functions we want to be accessible inside Lua code.
+Lua expects the C functions that we register with it to have the form
+`int (*)(lua_State*)` (`lua_CFunction`). Also, it’s a good idea to take
+a look at the metatable events that Lua 5.3 supports
+http://lua-users.org/wiki/MetatableEvents[here]. They provide
+customization options for our new table type (as an example, we get the
+same functionality as C++ operator overloading, where we get to define
+what an operator does for our table type).
+
+Now we move on to registering our metatable with Lua:
+
+[source,c]
+----
+int a_t_register(lua_State *ls) {
+ lua_checkstack(ls, 4);
+ lua_newtable(ls);
+ luaL_setfuncs(ls, a_t_methods, 0);
+ luaL_newmetatable(ls, "a_t");
+ luaL_setfuncs(ls, a_t_methods, 0);
+ luaL_setfuncs(ls, a_t_meta, 0);
+ lua_pushliteral(ls, "__index");
+ lua_pushvalue(ls, -3);
+ lua_rawset(ls, -3);
+ lua_pushliteral(ls, "__metatable");
+ lua_pushvalue(ls, -3);
+ lua_rawset(ls, -3);
+ lua_setglobal(ls, "a_t");
+ return 0;
+}
+----
+
+Please note that we are registering the metatable as a global. It is
+generally not recommended to do so. Why, you ask? Adding a new entry to
+the global table in Lua means you are already reserving that keyword,
+so if another library also needs that key, you are going to have lots
+of fun (the term `fun` here is borrowed from the Dwarf Fortress
+literature). Entries in the global table also require Lua to look
+things up in the global table, which slows things down a bit, though
+whether the slow-down is significant enough really depends on you and
+your requirements.
+
+We are almost done with our new table, but there is one thing
+remaining: our table doesn’t have a cozy constructor (cozy constructors
+are not a thing. Seriously. I just made it up.). We can use our `new`
+function as a constructor, since we have registered it with our
+metatable, but it requires you to pass all the arguments at
+construction time. Sometimes it’s convenient to hold off on passing all
+or some of the args at construction time, mostly because you are
+writing a library and your power users will do all sorts of
+unconventional and crazy/creative things with your library.
+
+Remember metatable events? That’s what we’ll use. Lua metatables
+support something called metatable events. Each event has a string key,
+and the value is whatever you put as the value. The values are used
+whenever that event happens. Some of the events are:
+
+* `__call`
+* `__pairs`
+* `__sub`
+* `__add`
+* `__gc`
+
+The `__sub` event is triggered when your table is the operand of a
+subtraction operator. `__gc` is used when Lua wants to dispose of the
+table, so if you are handling the memory yourself, in contrast to
+letting Lua handle it for you, here’s where you free the memory. The
+events are a powerful tool that helps us customize how our new table
+behaves.
+
+For a constructor, we will use the `__call` event. That means when
+someone calls our metatable in Lua (the call event is triggered when
+our table is called, syntactically speaking), like this:
+
+[source,lua]
+----
+local a = a_t()
+----
+
+`a` will become a new instance of our table. We can add a value for
+our metatable’s `__call` key from either Lua or C. Since we are talking
+about Lua and have written almost nothing in Lua so far, let’s do it in
+Lua:
+
+[source,lua]
+----
+setmetatable(a_t, {__call =
+ function(self, arg1, arg2, arg3, arg4, arg5)
+ local t = self.new(arg1, arg2, arg3, arg4, arg5)
+ return t
+ end
+ }
+)
+----
+
+We use our `new` method, which we previously registered for our
+metatable. Note that Lua will pass `nil` for any argument we don’t
+provide. That’s how our cozy constructor works.
+
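Putting it all together, typical usage from the Lua side would look something like this (hypothetical values; the getter and setter names are the ones we registered above):

```lua
local a = a_t()          -- cozy constructor via the __call event
a:set_a_int(2)
a:set_a_float(3.14)
a:set_a_string("hello")
print(a:a_int(), a:a_float(), a:a_string())
```
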
+=== Final Words
+
+The tutorial’s goal is to show you one way of doing the task and not
+necessarily the best way of doing it. Besides, depending on your
+situation, you might want to do things differently so by no means is
+this tutorial enough. It’s an entry-level tutorial. Any feedback,
+suggestions, and/or fixes to the tutorial are much appreciated.
+
+=== Shameless Plug
+
+I needed to turn a C struct into a lua table for an application I’m
+working https://github.com/bloodstalker/mutator/tree/master/bruiser[on].
+Further down the line, I needed to do the same for a lot more C
+structs, with the possibility of having to repeat that for even more
+later. I just couldn’t bring myself to do it manually for that many C
+structs so I decided to work on a code generator that does that for me.
+The result is https://github.com/bloodstalker/luatablegen[luatablegen].
+`luatablegen` is a simple script that takes the description of your C
+structures in an XML file and generates the C code for your new tables
+and metatables. It does everything we did by hand automatically for us.
+`luatablegen` is in its early stages, so again, any feedback or help
+will be appreciated.
+
+timestamp:1705630055
+
+version:1.0.0
+
+https://blog.terminaldweller.com/rss/feed
+
+https://raw.githubusercontent.com/terminaldweller/blog/main/mds/cstruct2luatable.md
diff --git a/mds/howtogetyourSMSonIRC.txt b/mds/howtogetyourSMSonIRC.txt
new file mode 100644
index 0000000..438e7b0
--- /dev/null
+++ b/mds/howtogetyourSMSonIRC.txt
@@ -0,0 +1,228 @@
+== How to get your SMS on IRC
+
+It’s not really a continuation of the ``one client for everything'' post
+but it is in the same vein. Basically, in this post we are going to make
+it so that we receive our SMS messages on IRC. More specifically, it
+will send it to an IRC channel. In my case this works and is actually
+secure, since the channel I have the SMS going to is on my own IRC
+network which only allows users in after they do a successful SASL
+authentication.
+
+The general idea is this:
+
+* We run an app on our phone that will send the SMS to a web hook server
+* The web hook server has an IRC client that will send the message to
+the IRC channel
+
+=== Security considerations
+
+==== SMS vs https://en.wikipedia.org/wiki/Rich_Communication_Services[RCS]
+
+For forwarding the SMS I get on my cellphone from my cellphone to the
+web hook server, I use
+https://github.com/bogkonstantin/android_income_sms_gateway_webhook[android_income_sms_gateway_webhook].
+This app does not support RCS (see
+https://github.com/bogkonstantin/android_income_sms_gateway_webhook/issues/46[#46]).
+For this to work, make sure your phone has RCS disabled unless you use
+another app that supports RCS.
+
+==== Web hook server connection
+
+The app will be connecting to our web hook server. The ideal way I
+wanted to do this would be to connect to a VPN, only through which we
+can access the web hook server. But it’s Android, not Linux. I don’t
+know how I can do that on Android, so that’s a no-go. The next idea is
+to use local port mapping with openssh to send the SMS through an ssh
+tunnel. That is very feasible without rooting the phone, since a
+one-liner in termux can take care of it, but automating it is a bit of
+a hassle. Currently the only measure I am taking is to just use https
+instead of http. Since we are using TLS, we can apply the normal TLS
+hardening measures server-side. We are using nginx as the reverse
+proxy. We will also terminate the TLS connection on nginx. We will be
+using https://github.com/pocketbase/pocketbase[pocketbase] for the
+record storage and authentication. We can extend pocketbase, which is
+exactly how we will be making our sms web hook. Pocketbase will give us
+the record storage and authentication/registration we need. We will use
+https://github.com/lrstanley/girc[girc] for our IRC library. My
+personal IRC network will require successful SASL authentication before
+letting anyone into the network, so supporting SASL auth (PLAIN) is a
+requirement.
+
+We can use basic http authentication with our chosen app. We can
+configure the JSON body of the POST request our web hook server will
+receive. The default POST request the app sends looks like this. For
+the body:
+
+[source,json]
+----
+{
+ "from": "%from%",
+ "text": "%text%",
+ "sentStamp": "%sentStamp%",
+ "receivedStamp": "%receivedStamp%",
+ "sim": "%sim%"
+}
+----
+
+And for the header:
+
+[source,json]
+----
+{ "User-Agent": "SMS Forwarder App" }
+----
+
+We get static credentials, so basic http auth is all we can do. We
+don’t need to encode the client information into the security token, so
+we’ll just rely on a bearer token in the header for both authentication
+and authorization.
+
+==== Authentication and Authorization
+
+In our case, the only resource we have is the ability to post on the
+endpoint, so authentication and authorization will be synonymous. We
+can put the basic auth credentials in the url:
+
+....
+https://user:pass@sms.mywebhook.com
+....
+
+Also do please remember that on the app side we need to add the
+authorization header like so:
+
+[source,json]
+----
+{"Content-Type": "application/json"; "Authorization": "Basic base64-encoded-username:password"}
+----
+
+As for the url, use your endpoint without including the username and
+password in the URI.
+
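+If you’re unsure what the base64-encoded value for the `Authorization`
+header should look like, a quick shell one-liner will produce it
+(`user:pass` is a placeholder):
+
+[source,sh]
+----
+# encode user:pass for HTTP basic auth; printf avoids a trailing newline
+printf '%s' 'user:pass' | base64
+# prints: dXNlcjpwYXNz
+----
+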
+=== Dev works
+
+You can find the finished code
+https://github.com/terminaldweller/sms-webhook[here].
+
+Here’s a brief explanation of what the code does: We launch the irc bot
+in a goroutine. The web hook server will only respond to POST requests
+on `/sms` after a successful basic http authentication. In our case
+there is no reason not to use a randomized username as well. So
+effectively we will have two secrets this way. You can create a new user
+in the pocketbase admin panel. Pocketbase comes with a default
+collection for users so just create a new entry in there.
+
+* The code will respond with a 401 for all failed authentication
+attempts.
+* We don’t fill in missing credentials for non-existent users to make
+timing attacks harder. That’s something we can do later.
+
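+The auth gate itself is simple enough to sketch with just the standard
+library. The following is a hypothetical, self-contained approximation
+of that behaviour (the real code uses pocketbase and girc; all names
+and credentials here are made up):
+
+[source,go]
+----
+package main
+
+import (
+	"fmt"
+	"net/http"
+	"net/http/httptest"
+)
+
+// newSMSHandler sketches the webhook's auth gate: every failed
+// basic-auth attempt gets a 401, and only POST /sms is served.
+func newSMSHandler(user, pass string) http.HandlerFunc {
+	return func(w http.ResponseWriter, r *http.Request) {
+		u, p, ok := r.BasicAuth()
+		if !ok || u != user || p != pass {
+			w.WriteHeader(http.StatusUnauthorized)
+			return
+		}
+		if r.Method != http.MethodPost || r.URL.Path != "/sms" {
+			w.WriteHeader(http.StatusNotFound)
+			return
+		}
+		// the real server would decode the JSON body here and hand
+		// the message off to the IRC client goroutine
+		w.WriteHeader(http.StatusOK)
+	}
+}
+
+func main() {
+	srv := httptest.NewServer(newSMSHandler("user", "pass"))
+	defer srv.Close()
+
+	req, _ := http.NewRequest(http.MethodPost, srv.URL+"/sms", nil)
+	req.SetBasicAuth("user", "pass")
+	good, _ := http.DefaultClient.Do(req)
+
+	bad, _ := http.Get(srv.URL + "/sms") // no credentials at all
+
+	fmt.Println(good.StatusCode, bad.StatusCode)
+}
+----
+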
+=== Deployment
+
+[source,nginx]
+----
+events {
+ worker_connections 1024;
+}
+http {
+ include /etc/nginx/mime.types;
+ server_tokens off;
+ limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;
+ server {
+ listen 443 ssl;
+ keepalive_timeout 60;
+ charset utf-8;
+ ssl_certificate /etc/letsencrypt/live/sms.terminaldweller.com/fullchain.pem;
+ ssl_certificate_key /etc/letsencrypt/live/sms.terminaldweller.com/privkey.pem;
+ ssl_ciphers HIGH:!aNULL:!MD5:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
+ ssl_protocols TLSv1.2 TLSv1.3;
+ ssl_session_cache shared:SSL:50m;
+ ssl_session_timeout 1d;
+ ssl_session_tickets off;
+ ssl_prefer_server_ciphers on;
+ tcp_nopush on;
+ add_header X-Content-Type-Options "nosniff" always;
+ add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
+ add_header X-Frame-Options SAMEORIGIN always;
+ add_header X-XSS-Protection "1; mode=block" always;
+ add_header Referrer-Policy "no-referrer";
+ fastcgi_hide_header X-Powered-By;
+
+ error_page 401 403 404 /404.html;
+ location / {
+ proxy_pass http://sms-webhook:8090;
+ }
+ }
+}
+----
+
+[source,yaml]
+----
+version: "3.9"
+services:
+ sms-webhook:
+ image: sms-webhook
+ build:
+ context: .
+ deploy:
+ resources:
+ limits:
+ memory: 256M
+ logging:
+ driver: "json-file"
+ options:
+ max-size: "100m"
+ networks:
+ - smsnet
+ restart: unless-stopped
+ depends_on:
+ - redis
+ volumes:
+ - pb-vault:/sms-webhook/pb_data
+ - ./config.toml:/opt/smswebhook/config.toml
+ cap_drop:
+ - ALL
+ dns:
+ - 9.9.9.9
+ environment:
+ - SERVER_DEPLOYMENT_TYPE=deployment
+ entrypoint: ["/sms-webhook/sms-webhook"]
+ command: ["serve", "--http=0.0.0.0:8090"]
+ nginx:
+ deploy:
+ resources:
+ limits:
+ memory: 128M
+ logging:
+ driver: "json-file"
+ options:
+ max-size: "100m"
+ image: nginx:stable
+ ports:
+ - "8090:443"
+ networks:
+ - smsnet
+ restart: unless-stopped
+ cap_drop:
+ - ALL
+ cap_add:
+ - CHOWN
+ - DAC_OVERRIDE
+ - SETGID
+ - SETUID
+ - NET_BIND_SERVICE
+ volumes:
+ - ./nginx.conf:/etc/nginx/nginx.conf:ro
+ - /etc/letsencrypt/live/sms.terminaldweller.com/fullchain.pem:/etc/letsencrypt/live/sms.terminaldweller.com/fullchain.pem:ro
+ - /etc/letsencrypt/live/sms.terminaldweller.com/privkey.pem:/etc/letsencrypt/live/sms.terminaldweller.com/privkey.pem:ro
+networks:
+ smsnet:
+ driver: bridge
+volumes:
+ pb-vault:
+----
+
+timestamp:1706042815
+
+version:1.1.0
+
+https://blog.terminaldweller.com/rss/feed
+
+https://raw.githubusercontent.com/terminaldweller/blog/main/mds/howtogetyourSMSonIRC.md
diff --git a/mds/lazymakefiles.txt b/mds/lazymakefiles.txt
new file mode 100644
index 0000000..09e9960
--- /dev/null
+++ b/mds/lazymakefiles.txt
@@ -0,0 +1,690 @@
+== Lazy Makefiles
+
+I kept finding myself needing to build some C or C++ code but I just
+couldn’t be bothered to write a makefile from ground up. My life’s too
+short for that. The code was either not that big of a deal or the build
+process was not anything complicated. Yes, I’m lazy. The alternative to
+writing a makefile is just typing in gcc or clang instead of make into
+the terminal. I know. The horror. It’s 2018. What sort of a barbarian
+does that? So I just decided to write a lazy makefile so I would never
+have to type in the name of my compiler of choice ever again. Mostly
+because that’s what you do with things that you love. Forget about them
+until you need them. We’re still talking about compilers and makefiles
+for your information. Don’t go assuming things about my personal life.
+
+First off, you can find the makefiles
+https://github.com/bloodstalker/lazymakefiles[here]. They are licensed
+under the Unlicense. And I’m using plural because there’s one for C and
+one for C++. Now that we are done with the mandatory whimsical
+introduction, let’s talk about the contents of the makefiles. There are
+also a couple of things to note:
+
+* The makefiles have been written with gnu make in mind.
+* Most targets will be fine with gcc but the full functionality is
+achieved by using clang.
+* This is not a makefile 101.
+* I’m not going to try to copy the makefile contents here line by line.
+You are expected to have the makefile open while reading this.
+* I will be explaining some of the more, let’s say, esoteric behaviours
+of make which can get the beginners confused.
+* gnu make variables are considered macros by C/C++ standards. I will
+use the term ``variable'' since it’s what the gnu make documents use.
+* The makefiles are not supposed to be hands-off. I change bits here and
+there from project to project.
+* The makefile recognizes the following extensions: `.c` and `.cpp`. If
+you use different extensions, change the makefile accordingly.
+
+=== The Macros
+
+`TARGET` holds the target name. It uses the `?=` assignment operator so
+you can pass it a different value from a script, just in case. There are
+a bunch of variables that you can assign on the terminal to replace the
+makefile’s defaults. Among those there are some that first get a
+default value assigned and then get the `?=` assignment operator so you
+can assign them values from the terminal, e.g.:
+
+[source,make]
+----
+CC=clang
+CC?=clang
+----
+
+It looks a bit backwards but there is a reason for that. The reason why
+we need to do that is because those variables are called
+`implicit variables` in gnu make terminology. Implicit variables are
+already defined by your makefile even if you haven’t defined them, so
+they get some special treatment. In order to assign them values from
+the terminal, we first assign them a value and then use the `?=`
+operator on them. We don’t really need to assign the default value here
+again, but I felt like it would be more expressive to assign the
+default for a second time.
+
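+To see the terminal override in action, here’s a throwaway makefile
+exercising the same pair of assignments (the path is arbitrary):
+
+[source,sh]
+----
+# build a tiny demo makefile; \t is the recipe tab make requires
+printf 'CC=clang\nCC?=clang\nall:\n\t@echo $(CC)\n' > /tmp/demo.mk
+make -s -f /tmp/demo.mk          # prints: clang
+make -s -f /tmp/demo.mk CC=gcc   # prints: gcc
+----
+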
+Variables `CC_FLAGS`, `CXX_FLAGS` and `LD_FLAGS` have accompanying
+variables, namely `CC_FLAGS_EXTRA`, `CXX_FLAGS_EXTRA` and
+`LD_FLAGS_EXTRA`. The extra ones use the `?=` assignment. The scheme is
+to have the first set to host the invariant options and use the second
+set, to change the options that would need changing between different
+builds, if need be.
+
+The variable `BUILD_MODE` is used for the sanitizer builds of clang.
+`ADDSAN` will build the code with the address sanitizer. `MEMSAN` will
+build the code with the memory sanitizer and `UBSAN` will build the
+code with the undefined behaviour sanitizer. The build mode will affect
+all the other targets, meaning you will get a dynamically-linked
+executable in debug mode with memory sanitizers if you assign `MEMSAN`
+to `BUILD_MODE`.
+
+=== Targets
+
+==== default
+
+The default target is `all`. `all` depends on `TARGET`.
+
+==== all
+
+`all` is an aggregate target. Calling it will build, or rather, try to
+build everything (given your source code’s situation, some targets
+might not make any sense).
+
+==== depend
+
+`depend` depends on `.depend` which is a file generated by the makefile
+that holds the header dependencies. This is how we are making the
+makefile sensitive to header changes. The file’s contents look like
+this:
+
+[source,make]
+----
+main.c:main.h
+myfile1.c:myfile1.h myfile2.h
+----
+
+The inclusion directive is prefixed with a `-`. That’s make lingo for
+ignore-if-error. My shell prompt has a `make -q` part in it so just
+`cd`ing into a folder will generate the `.depend` file for me. Lazy and
+convenient.
+
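+The `.depend` contents come straight out of the compiler. You can
+reproduce a dependency line by hand with the `-MM` flag (hypothetical
+files under /tmp):
+
+[source,sh]
+----
+mkdir -p /tmp/dep && cd /tmp/dep
+printf '#pragma once\n' > main.h
+printf '#include "main.h"\nint main(void){return 0;}\n' > main.c
+cc -MM main.c   # prints: main.o: main.c main.h
+----
+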
+==== Objects
+
+For the objects, there are three sets. You have the normal garden
+variety objects that end in `.o`. You get the debug enabled objects that
+end in `.odbg` and you get the instrumented objects that are to be used
+for coverage that end in `.ocov`. I made the choice of having three
+distinct sets of objects since I personally sometimes struggle to
+remember whether the current objects are normal, debug or coverage. This
+way, I don’t need to. That’s the makefile’s problem now.
+
+==== TARGET
+
+Vanilla, i.e. the dynamically-linked executable.
+
+==== TARGET-static
+
+The statically-linked executable.
+
+==== TARGET-dbg
+
+The dynamically-linked executable in debug mode.
+
+==== TARGET-cov
+
+The instrumented-for-coverage executable, dynamically-linked.
+
+==== cov
+
+This target generates the coverage report. It depends on `runcov`
+which, in turn, depends on `$(TARGET)-cov`, so if you change `runcov`
+to match how your executable should run, `cov` will handle rebuilding
+the objects and then running and generating the coverage report.
+
+==== covrep
+
+The exact same as above but generates coverage report in a different
+format.
+
+==== ASM
+
+Generates the assembly files for your objects, in Intel syntax.
+
+==== SO
+
+Will try to build your target as a shared object.
+
+==== A
+
+Will try to build your target as an archive, i.e. static library.
+
+==== TAGS
+
+Depends on the `tags` target, generates a tags file. The tags file
+includes tags from the header files included by your source as well.
+
+==== valgrind
+
+Depends on `$(TARGET)` by default, runs valgrind with
+`--leak-check=yes`. You probably need to change this for the makefile to
+run your executable correctly.
+
+==== format
+
+Runs clang-format on all your source files and header files and ***EDITS
+THEM IN PLACE***. Expects a clang format file to be present in the
+directory.
+
+==== js
+
+Builds the target using emscripten and generates a javascript file.
+
+==== clean and deepclean
+
+`clean` cleans almost everything. `deepclean` depends on `clean`. It’s
+basically a two-level scheme so you can have two different sets of
+clean commands.
+
+==== help
+
+Prints out the condensed version of what I’ve been trying to put into
+words.
+
+Well, that’s about it. Below you can find the current (at the time of
+writing) version of both the C and the Cpp makefiles. You can always find
+the latest versions
+https://raw.githubusercontent.com/terminaldweller/scripts/main/makefilec[here]
+for C and
+https://raw.githubusercontent.com/terminaldweller/scripts/main/makefilecpp[here]
+for Cpp.
+
+=== C
+
+[source,make]
+----
+TARGET?=main
+SHELL=bash
+SHELL?=bash
+CC=clang
+CC?=clang
+ifdef OS
+CC_FLAGS=
+else
+CC_FLAGS=-fpic
+endif
+CC_EXTRA?=
+CTAGS_I_PATH?=./
+LD_FLAGS=
+EXTRA_LD_FLAGS?=
+ADD_SANITIZERS_CC= -g -fsanitize=address -fno-omit-frame-pointer
+ADD_SANITIZERS_LD= -g -fsanitize=address
+MEM_SANITIZERS_CC= -g -fsanitize=memory -fno-omit-frame-pointer
+MEM_SANITIZERS_LD= -g -fsanitize=memory
+UB_SANITIZERS_CC= -g -fsanitize=undefined -fno-omit-frame-pointer
+UB_SANITIZERS_LD= -g -fsanitize=undefined
+FUZZ_SANITIZERS_CC= -fsanitize=fuzzer,address -g -fno-omit-frame-pointer
+FUZZ_SANITIZERS_LD= -fsanitize=fuzzer,address -g -fno-omit-frame-pointer
+COV_CC= -fprofile-instr-generate -fcoverage-mapping
+COV_LD= -fprofile-instr-generate
+# BUILD_MODES are=RELEASE(default), DEBUG,ADDSAN,MEMSAN,UBSAN,FUZZ
+BUILD_MODE?=RELEASE
+#EXCLUSION_LIST='(\bdip)|(\bdim)'
+EXCLUSION_LIST='xxxxxx'
+OBJ_LIST:=$(patsubst %.c, %.o, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+OBJ_COV_LIST:=$(patsubst %.c, %.ocov, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+OBJ_DBG_LIST:=$(patsubst %.c, %.odbg, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+ASM_LIST:=$(patsubst %.c, %.s, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+WASM_LIST:=$(patsubst %.c, %.wasm, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+WAST_LIST:=$(patsubst %.c, %.wast, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+IR_LIST:=$(patsubst %.c, %.ir, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+JS_LIST:=$(patsubst %.c, %.js, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+AST_LIST:=$(patsubst %.c, %.ast, $(shell find . -name '*.c' | grep -Ev $(EXCLUSION_LIST)))
+
+ifeq ($(BUILD_MODE), ADDSAN)
+ifeq ($(CC), gcc)
+$(error This build mode is only useable with clang.)
+endif
+CC_EXTRA+=$(ADD_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(ADD_SANITIZERS_LD)
+endif
+
+ifeq ($(BUILD_MODE), MEMSAN)
+ifeq ($(CC), gcc)
+$(error This build mode is only useable with clang.)
+endif
+CC_EXTRA+=$(MEM_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(MEM_SANITIZERS_LD)
+endif
+
+ifeq ($(BUILD_MODE), UBSAN)
+ifeq ($(CC), gcc)
+$(error This build mode is only useable with clang.)
+endif
+CC_EXTRA+=$(UB_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(UB_SANITIZERS_LD)
+endif
+
+ifeq ($(BUILD_MODE), FUZZ)
+ifeq ($(CXX), g++)
+$(error This build mode is only useable with clang++.)
+endif
+CXX_EXTRA+=$(FUZZ_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(FUZZ_SANITIZERS_LD)
+endif
+
+SRCS:=$(wildcard *.c)
+HDRS:=$(wildcard *.h)
+CC_FLAGS+=$(CC_EXTRA)
+LD_FLAGS+=$(EXTRA_LD_FLAGS)
+
+.DEFAULT:all
+
+.PHONY:all clean help ASM SO TAGS WASM JS IR WAST A ADBG AST cppcheck DOCKER
+
+all:$(TARGET)
+
+everything:$(TARGET) A ASM SO $(TARGET)-static $(TARGET)-dbg ADBG TAGS $(TARGET)-cov WASM JS IR WAST AST DOCKER
+
+depend:.depend
+
+.depend:$(SRCS)
+ rm -rf .depend
+ $(CC) -MM $(CC_FLAGS) $^ > ./.depend
+ echo $(patsubst %.o:, %.odbg:, $(shell $(CC) -MM $(CC_FLAGS) $^)) | sed -r 's/[A-Za-z0-9\-\_]+\.odbg/\n&/g' >> ./.depend
+ echo $(patsubst %.o:, %.ocov:, $(shell $(CC) -MM $(CC_FLAGS) $^)) | sed -r 's/[A-Za-z0-9\-\_]+\.ocov/\n&/g' >> ./.depend
+
+-include ./.depend
+
+.c.o:
+ $(CC) $(CC_FLAGS) -c $< -o $@
+
+%.odbg:%.c
+ $(CC) $(CC_FLAGS) -g -c $< -o $@
+
+%.ocov:%.c
+ $(CC) $(CC_FLAGS) $(COV_CC) -c $< -o $@
+
+$(TARGET): $(OBJ_LIST)
+ $(CC) $(LD_FLAGS) $^ -o $@
+
+$(TARGET)-static: $(OBJ_LIST)
+ $(CC) $(LD_FLAGS) $^ -static -o $@
+
+$(TARGET)-dbg: $(OBJ_DBG_LIST)
+ $(CC) $(LD_FLAGS) $^ -g -o $@
+
+$(TARGET)-cov: $(OBJ_COV_LIST)
+ $(CC) $(LD_FLAGS) $^ $(COV_LD) -o $@
+
+cov: runcov
+ @llvm-profdata merge -sparse ./default.profraw -o ./default.profdata
+ @llvm-cov show $(TARGET)-cov -instr-profile=default.profdata
+
+covrep: runcov
+ @llvm-profdata merge -sparse ./default.profraw -o ./default.profdata
+ @llvm-cov report $(TARGET)-cov -instr-profile=default.profdata
+
+ASM:$(ASM_LIST)
+
+SO:$(TARGET).so
+
+A:$(TARGET).a
+
+ADBG:$(TARGET).adbg
+
+IR:$(IR_LIST)
+
+WASM:$(WASM_LIST)
+
+WAST:$(WAST_LIST)
+
+JS:$(JS_LIST)
+
+AST:$(AST_LIST)
+
+TAGS:tags
+
+#https://github.com/rizsotto/Bear
+BEAR: clean
+ bear -- make
+
+tags:$(SRCS)
+ $(shell $(CC) -c -I $(CTAGS_I_PATH) -M $(SRCS)|\
+ sed -e 's/[\\ ]/\n/g'|sed -e '/^$$/d' -e '/\.o:[ \t]*$$/d'|\
+ ctags -L - --c++-kinds=+p --fields=+iaS --extra=+q)
+
+%.s: %.c
+ $(CC) -S $< -o $@
+ # objdump -r -d -M intel -S $< > $@
+
+%.ir: %.c
+ $(CC) -emit-llvm -S -o $@ $<
+
+%.wasm: %.c
+ emcc $< -o $@
+
+%.wast: %.wasm
+ wasm2wat $< > $@
+
+%.js: %.c
+ emcc $< -s FORCE_FILESYSTEM=1 -s EXIT_RUNTIME=1 -o $@
+
+%.ast: %.c
+ $(CC) -Xclang -ast-dump -fsyntax-only $< > $@
+
+$(TARGET).so: $(OBJ_LIST)
+ $(CC) $(LD_FLAGS) $^ -shared -o $@
+
+$(TARGET).a: $(OBJ_LIST)
+ ar rcs $(TARGET).a $(OBJ_LIST)
+
+$(TARGET).adbg: $(OBJ_DBG_LIST)
+ ar rcs $(TARGET).adbg $(OBJ_DBG_LIST)
+
+runcov: $(TARGET)-cov
+ "./$(TARGET)-cov"
+
+test: $(TARGET)
+ "./$(TARGET)"
+
+run: $(TARGET)
+ "./$(TARGET)"
+
+valgrind: $(TARGET)
+ - valgrind --track-origins=yes --leak-check=full --show-leak-kinds=all "./$(TARGET)"
+
+cppcheck:
+ cppcheck $(SRCS)
+
+rundbg: $(TARGET)-dbg
+ gdb --batch --command=./debug.dbg --args "./$(TARGET)-dbg"
+
+format:
+ - clang-format -i $(SRCS) $(HDRS)
+
+DOCKER: Dockerfile
+ docker build -t proto ./
+
+clean:
+ - rm -f *.o *.s *.odbg *.ocov *.js *.ir *~ $(TARGET) $(TARGET).so $(TARGET)-static \
+ $(TARGET)-dbg $(TARGET).a $(TARGET)-cov *.wasm *.wast $(TARGET).adbg *.ast
+
+deepclean: clean
+ - rm tags
+ - rm .depend
+ - rm ./default.profraw ./default.profdata
+ - rm vgcore.*
+ - rm compile_commands.json
+ - rm *.gch
+
+help:
+ @echo "--all is the default target, runs $(TARGET) target"
+ @echo "--everything will build everything"
+ @echo "--SO will generate the so"
+ @echo "--ASM will generate assembly files"
+ @echo "--TAGS will generate tags file"
+ @echo "--BEAR will generate a compilation database"
+ @echo "--IR will generate llvm IR"
+ @echo "--JS will make the js file"
+ @echo "--AST will make the llvm ast file"
+ @echo "--WASM will make the wasm file"
+ @echo "--WAST will make the wasm text debug file"
+ @echo "--$(TARGET) builds the dynamically-linked executable"
+ @echo "--$(TARGET)-dbg will generate the debug build. BUILD_MODE should be set to DEBUG to work"
+ @echo "--$(TARGET)-static will statically link the executable to the libraries"
+ @echo "--$(TARGET)-cov is the coverage build"
+ @echo "--cov will print the coverage report"
+ @echo "--covrep will print the line coverage report"
+ @echo "--A will build the static library"
+ @echo "--TAGS will build the tags file"
+ @echo "--clean"
+ @echo "--deepclean will clean almost everything"
+----
+
+=== Cpp
+
+[source,make]
+----
+TARGET?=main
+SHELL=bash
+SHELL?=bash
+CXX=clang++
+CXX?=clang++
+ifdef OS
+CXX_FLAGS=-std=c++20
+else
+CXX_FLAGS=-std=c++20 -fpic
+endif
+CXX_EXTRA?=
+CTAGS_I_PATH?=./
+LD_FLAGS= -include-pch header.hpp.gch
+EXTRA_LD_FLAGS?=
+ADD_SANITIZERS_CC= -g -fsanitize=address -fno-omit-frame-pointer
+ADD_SANITIZERS_LD= -g -fsanitize=address
+MEM_SANITIZERS_CC= -g -fsanitize=memory -fno-omit-frame-pointer
+MEM_SANITIZERS_LD= -g -fsanitize=memory
+UB_SANITIZERS_CC= -g -fsanitize=undefined -fno-omit-frame-pointer
+UB_SANITIZERS_LD= -g -fsanitize=undefined
+FUZZ_SANITIZERS_CC= -fsanitize=fuzzer,address -g -fno-omit-frame-pointer
+FUZZ_SANITIZERS_LD= -fsanitize=fuzzer,address -g -fno-omit-frame-pointer
+COV_CXX= -fprofile-instr-generate -fcoverage-mapping
+COV_LD= -fprofile-instr-generate
+# BUILD_MODES are=RELEASE(default), DEBUG,ADDSAN,MEMSAN,UBSAN,FUZZ
+BUILD_MODE?=RELEASE
+#EXCLUSION_LIST='(\bdip)|(\bdim)'
+EXCLUSION_LIST='xxxxxx'
+OBJ_LIST:=$(patsubst %.cpp, %.o, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+OBJ_COV_LIST:=$(patsubst %.cpp, %.ocov, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+OBJ_DBG_LIST:=$(patsubst %.cpp, %.odbg, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+ASM_LIST:=$(patsubst %.cpp, %.s, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+WASM_LIST:=$(patsubst %.cpp, %.wasm, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+WAST_LIST:=$(patsubst %.cpp, %.wast, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+IR_LIST:=$(patsubst %.cpp, %.ir, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+JS_LIST:=$(patsubst %.cpp, %.js, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+AST_LIST:=$(patsubst %.cpp, %.ast, $(shell find . -name '*.cpp' | grep -Ev $(EXCLUSION_LIST)))
+
+ifeq ($(BUILD_MODE), ADDSAN)
+ifeq ($(CXX), g++)
+$(error This build mode is only useable with clang++.)
+endif
+CXX_EXTRA+=$(ADD_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(ADD_SANITIZERS_LD)
+endif
+
+ifeq ($(BUILD_MODE), MEMSAN)
+ifeq ($(CXX), g++)
+$(error This build mode is only useable with clang++.)
+endif
+CXX_EXTRA+=$(MEM_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(MEM_SANITIZERS_LD)
+endif
+
+ifeq ($(BUILD_MODE), UBSAN)
+ifeq ($(CXX), g++)
+$(error This build mode is only useable with clang++.)
+endif
+CXX_EXTRA+=$(UB_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(UB_SANITIZERS_LD)
+endif
+
+ifeq ($(BUILD_MODE), FUZZ)
+ifeq ($(CXX), g++)
+$(error This build mode is only useable with clang++.)
+endif
+CXX_EXTRA+=$(FUZZ_SANITIZERS_CC)
+EXTRA_LD_FLAGS+=$(FUZZ_SANITIZERS_LD)
+endif
+
+SRCS:=$(wildcard *.cpp)
+HDRS:=$(wildcard *.h)
+CXX_FLAGS+=$(CXX_EXTRA)
+LD_FLAGS+=$(EXTRA_LD_FLAGS)
+
+.DEFAULT:all
+
+.PHONY:all clean help ASM SO TAGS WASM JS exe IR WAST A ADBG AST cppcheck DOCKER
+
+all:exe
+
+everything:$(TARGET) A ASM SO $(TARGET)-static $(TARGET)-dbg ADBG TAGS $(TARGET)-cov WASM JS IR WAST AST DOCKER
+
+depend:.depend
+
+.depend:$(SRCS)
+ rm -rf .depend
+ $(CXX) -MM $(CXX_FLAGS) $^ > ./.depend
+ echo $(patsubst %.o:, %.odbg:, $(shell $(CXX) -MM $(CXX_FLAGS) $^)) | sed -r 's/[A-Za-z0-9\-\_]+\.odbg/\n&/g' >> ./.depend
+ echo $(patsubst %.o:, %.ocov:, $(shell $(CXX) -MM $(CXX_FLAGS) $^)) | sed -r 's/[A-Za-z0-9\-\_]+\.ocov/\n&/g' >> ./.depend
+
+-include ./.depend
+
+.cpp.o: header.hpp.gch
+ $(CXX) $(CXX_FLAGS) -c $< -o $@
+
+%.odbg:%.cpp
+ $(CXX) $(CXX_FLAGS) -g -c $< -o $@
+
+%.ocov:%.cpp
+ $(CXX) $(CXX_FLAGS) $(COV_CXX) -c $< -o $@
+
+header.hpp.gch:header.hpp
+ $(CXX) $(CXX_FLAGS) -c $< -o $@
+
+exe: header.hpp.gch $(TARGET)
+
+$(TARGET): $(OBJ_LIST)
+ $(CXX) $(LD_FLAGS) $^ -o $@
+
+$(TARGET)-static: $(OBJ_LIST)
+ $(CXX) $(LD_FLAGS) $^ -static -o $@
+
+$(TARGET)-dbg: $(OBJ_DBG_LIST)
+ $(CXX) $(LD_FLAGS) $^ -g -o $@
+
+$(TARGET)-cov: $(OBJ_COV_LIST)
+ $(CXX) $(LD_FLAGS) $^ $(COV_LD) -o $@
+
+cov: runcov
+ @llvm-profdata merge -sparse ./default.profraw -o ./default.profdata
+ @llvm-cov show $(TARGET)-cov -instr-profile=default.profdata
+
+covrep: runcov
+ @llvm-profdata merge -sparse ./default.profraw -o ./default.profdata
+ @llvm-cov report $(TARGET)-cov -instr-profile=default.profdata
+
+ASM:$(ASM_LIST)
+
+SO:$(TARGET).so
+
+A:$(TARGET).a
+
+ADBG:$(TARGET).adbg
+
+IR:$(IR_LIST)
+
+WASM:$(WASM_LIST)
+
+WAST:$(WAST_LIST)
+
+JS:$(JS_LIST)
+
+AST:$(AST_LIST)
+
+TAGS:tags
+
+#https://github.com/rizsotto/Bear
+BEAR: clean
+ bear -- make
+
+tags:$(SRCS)
+ $(shell $(CXX) -c -I $(CTAGS_I_PATH) -M $(SRCS)|\
+ sed -e 's/[\\ ]/\n/g'|sed -e '/^$$/d' -e '/\.o:[ \t]*$$/d'|\
+ ctags -L - --c++-kinds=+p --fields=+iaS --extra=+q)
+
+%.s: %.cpp
+ $(CXX) -S $< -o $@
+ # objdump -r -d -M intel -S $< > $@
+
+%.ir: %.cpp
+ $(CXX) -emit-llvm -S -o $@ $<
+
+%.wasm: %.cpp
+ em++ $< -o $@
+
+%.wast: %.wasm
+ wasm2wat $< > $@
+
+%.js: %.cpp
+ em++ $< -s FORCE_FILESYSTEM=1 -s EXIT_RUNTIME=1 -o $@
+
+%.ast: %.cpp
+ $(CXX) -Xclang -ast-dump -fsyntax-only $< > $@
+
+$(TARGET).so: $(OBJ_LIST)
+ $(CXX) $(LD_FLAGS) $^ -shared -o $@
+
+$(TARGET).a: $(OBJ_LIST)
+ ar rcs $(TARGET).a $(OBJ_LIST)
+
+$(TARGET).adbg: $(OBJ_DBG_LIST)
+ ar rcs $(TARGET).adbg $(OBJ_DBG_LIST)
+
+runcov: $(TARGET)-cov
+ "./$(TARGET)-cov"
+
+test: $(TARGET)
+ "./$(TARGET)"
+
+run: $(TARGET)
+ "./$(TARGET)"
+
+valgrind: $(TARGET)
+ - valgrind --track-origins=yes --leak-check=full --show-leak-kinds=all "./$(TARGET)"
+
+cppcheck:
+ cppcheck $(SRCS)
+
+rundbg: $(TARGET)-dbg
+ gdb --batch --command=./debug.dbg --args "./$(TARGET)-dbg"
+
+format:
+ - clang-format -i $(SRCS) $(HDRS)
+
+DOCKER: Dockerfile
+	docker build -t proto ./
+
+clean:
+	- rm -f *.o *.s *.odbg *.ocov *.js *.ir *~ $(TARGET) $(TARGET).so $(TARGET)-static \
+ $(TARGET)-dbg $(TARGET).a $(TARGET)-cov *.wasm *.wast $(TARGET).adbg *.ast
+
+deepclean: clean
+ - rm tags
+ - rm .depend
+ - rm ./default.profraw ./default.profdata
+ - rm vgcore.*
+ - rm compile_commands.json
+ - rm *.gch
+
+help:
+ @echo "--all is the default target, runs $(TARGET) target"
+ @echo "--everything will build everything"
+ @echo "--SO will generate the so"
+ @echo "--ASM will generate assembly files"
+ @echo "--TAGS will generate tags file"
+ @echo "--BEAR will generate a compilation database"
+ @echo "--IR will generate llvm IR"
+ @echo "--$(TARGET) builds the dynamically-linked executable"
+ @echo "--$(TARGET)-dbg will generate the debug build. BUILD_MODE should be set to DEBUG to work"
+ @echo "--$(TARGET)-static will statically link the executable to the libraries"
+ @echo "--$(TARGET)-cov is the coverage build"
+ @echo "--cov will print the coverage report"
+ @echo "--covrep will print the line coverage report"
+ @echo "--A will build the static library"
+ @echo "--TAGS will build the tags file"
+ @echo "--clean"
+ @echo "--deepclean will clean almost everything"
+----
+
+timestamp:1705630055
+
+version:1.1.0
+
+https://blog.terminaldweller.com/rss/feed
+
+https://raw.githubusercontent.com/terminaldweller/blog/main/mds/lazymakefiles.md
diff --git a/mds/oneclientforeverything.txt b/mds/oneclientforeverything.txt
new file mode 100644
index 0000000..9d61a5b
--- /dev/null
+++ b/mds/oneclientforeverything.txt
@@ -0,0 +1,253 @@
+== One Client for Everything
+
+=== Table of Contents
+
+[arabic]
+. link:#foreword[Foreword]
+. link:#two-ways-of-solving-this[Two ways of solving this]
+. link:#the-web-app-way[The web app way]
+. link:#gui-or-terminal-client[gui or terminal client]
+. link:#matrix-or-irc[Matrix or IRC]
+
+=== Foreword
+
+First let’s talk about the problem we’re trying to solve here. I want to
+have a unified interface into all the communication forms that I use. I
+can’t be bothered to have different clients open all the time. I want to
+have one client that takes care of all things mostly well.
+
+=== Two ways of solving this
+
+There are generally two ways one can try to solve this. Number one is
+to just use a browser. Almost all forms of communication nowadays have
+a web client, so one way of solving our problem is to use a dedicated
+browser that has all the clients open. Mind you, there are even
+specialized and more lightweight browser offerings specifically geared
+towards this use-case, but still this option is not ideal in terms of
+resources and the interface you’re getting is not really unified.
+
+==== The web app way
+
+An example that comes to mind for this sort of solution is `rambox`
+though they are no longer offering a FOSS solution. I’m just mentioning
+them as an example of what’s being offered out there as a ready-to-use
+solution.
+
+Although this way of doing things is very resource-intensive, this is
+the *complete* way of doing things. What I mean by that is that by using
+the official web apps, you will not be compromising on any features that
+the clients offer since you will be using the official clients.
+
+==== gui or terminal client
+
+The second way of going about and solving this is to pick a very good
+client that supports a protocol with a lot of bridges and then bridge
+everything through to the app of that one protocol. Currently there are
+only three protocols that have enough facilities for bridging to make
+this feasible: IRC, Matrix and XMPP. I’m adding XMPP for the sake of
+completeness but in terms of practicality XMPP doesn’t have nearly as
+many bridges as IRC and Matrix.
+
+So this basically narrows down our choice to either IRC or Matrix. Now
+let’s look at the clients that are available for these two protocols.
+
+==== Matrix or IRC
+
+The last requirement on my side is that I would rather use a unified
+terminal keyboard-based client than a web application client. That
+being said, I definitely expect to use a web client too, since using a
+terminal client on a smart phone is pretty much just pain. A lot of
+pain.
+
Unfortunately, at the time of writing this post, Matrix has no terminal
client that comes close to either https://github.com/irssi/irssi[irssi]
or https://github.com/weechat/weechat[weechat], both terminal clients
that originally supported only IRC but later came to advertise
themselves as multi-chat clients. As an added bonus, starting from the
next irssi release, which should be v1.5, one can elect not to build
the IRC module at all when building irssi.
+
Matrix and IRC both have a rich ecosystem of bridges. Matrix has a
growing fan base, which means more and more bridges and tools with
similar functionality will be released for it. The number of bridges
for IRC seems smaller, but the ecosystem is still very much alive and
well.
+
+=== https://github.com/bitlbee/bitlbee[bitlbee-libpurple]
+
+....
+it'll be bitlbee
+....
+
bitlbee is bridge software for IRC. Its distinguishing feature is that
it bridges other protocols to IRC by masquerading as an ircd. You can
also use libpurple as the backend for bitlbee
(https://wiki.bitlbee.org/HowtoPurple[link]). libpurple has an origin
story similar to libreadline’s: it used to live inside pidgin, but was
later split out into a library so that other applications could use it
as well.
+
+List of protocols supported by libpurple:
+
+....
+aim
+bitlbee-discord
+bitlbee-mastodon
+bonjour
+eionrobb-icyque
+eionrobb-mattermost
+eionrobb-rocketchat
+facebook
+gg
+hangouts
+hehoe-signald
+hehoe-whatsmeow
+icq
+irc
+jabber
+matrix
+meanwhile
+novell
+otr
+simple
+sipe
+skypeweb
+slack
+steam
+telegram-tdlib
+zephyr
+....
+
+=== https://github.com/42wim/matterbridge[matterbridge]
+
matterbridge is an everything-to-everything bridge.

Keep in mind that with matterbridge you don’t get the full
functionality of a protocol: there are no private messages, for
example. What you get is the ability to join public chat rooms (or
whatever they are called in that protocol).
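To make the room-to-room model concrete, here is a minimal sketch of a
matterbridge config that gateways one IRC channel to one Telegram
group. The server, nick, token and channel IDs are all placeholder
assumptions, not values from this setup:

[source,toml]
----
# one account section per protocol instance
[irc.libera]
Server="irc.libera.chat:6667"
Nick="mybridgebot"

[telegram.mytelegram]
Token="<bot-token>"

# a gateway ties channels from different accounts together
[[gateway]]
name="gateway1"
enable=true

[[gateway.inout]]
account="irc.libera"
channel="#mychannel"

[[gateway.inout]]
account="telegram.mytelegram"
channel="-1001234567890"
----

Anything posted in either room gets relayed to the other; note there is
no per-user login here, which is why private messages are off the
table.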
+
+=== bridge ircds
+
+==== https://github.com/42wim/matterircd[matterircd]
+
A Mattermost bridge that emulates an ircd, as the name implies.
+
+==== https://github.com/progval/matrix2051[matrix2051]
+
Another bridge that emulates an ircd, but for Matrix.
+
+==== https://github.com/adsr/irslackd[irslackd]
+
A bridge to Slack that emulates an ircd.
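Once one of these bridge ircds is running, any IRC client can attach to
it like a regular server. As a sketch, with matterircd listening on
localhost port 7667, an irssi session would look roughly like this (the
server, team and credentials are placeholders; matterircd’s README
documents the login command via its service user):

....
/connect localhost 7667
/msg mattermost login <server> <team> <login> <password>
....

From irssi’s point of view, Mattermost channels are then just IRC
channels.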
+
+==== docker compose
+
https://github.com/ezkrg/docker-bitlbee-libpurple[Here]’s the original
Dockerfile. You can find mine
https://github.com/terminaldweller/docker-bitlbee-libpurple[here]. And
here’s the docker compose file I use that goes with it:
+
+[source,yaml]
+----
+version: "3.8"
+services:
+ bitlbee:
+ image: devi_bitlbee
+ deploy:
+ resources:
+ limits:
+ memory: 384M
+ logging:
+ driver: "json-file"
+ options:
+ max-size: "100m"
+ networks:
+ - bitlbeenet
+ ports:
+ - "127.0.0.1:8667:6667"
+ - "172.17.0.1:8667:6667"
+ restart: unless-stopped
+ user: "bitlbee:bitlbee"
+ command:
+ [
+ "/usr/sbin/bitlbee",
+ "-F",
+ "-n",
+ "-u",
+ "bitlbee",
+ "-c",
+ "/var/lib/bitlbee/bitlbee.conf",
+ "-d",
+ "/var/lib/bitlbee",
+ ]
+ dns:
+ - 9.9.9.9
+ volumes:
+ - ./conf/bitlbee.conf:/var/lib/bitlbee/bitlbee.conf:ro
+ - userdata:/var/lib/bitlbee
+ - /home/devi/.cache/docker-bitlbee/signald/run:/var/run/signald
+ - /etc/ssl/certs:/etc/ssl/certs:ro
+ signald:
+ image: signald/signald:stable
+ deploy:
+ resources:
+ limits:
+ memory: 384M
+ logging:
+ driver: "json-file"
+ options:
+ max-size: "100m"
+ networks:
+ - signalnet
+ ports:
+ - "127.0.0.1:7775:7775"
+ - "172.17.0.1:7775:7775"
+ restart: unless-stopped
+ dns:
+ - 9.9.9.9
+ volumes:
+ - /home/devi/.cache/docker-bitlbee/signald/run:/signald
+ - /etc/ssl/certs:/etc/ssl/certs:ro
+ environment:
+ - SIGNALD_ENABLE_METRICS=false
+ - SIGNALD_HTTP_LOGGING=true
+ - SIGNALD_VERBOSE_LOGGING=true
+ - SIGNALD_METRICS_PORT=7775
+ - SIGNALD_LOG_DB_TRANSACTIONS=true
+ matterircd:
+ image: 42wim/matterircd:latest
+ deploy:
+ resources:
+ limits:
+ memory: 384M
+ logging:
+ driver: "json-file"
+ options:
+ max-size: "100m"
+ networks:
+ - matterircdnet
+ ports:
+ - "127.0.0.1:7667:7667"
+ - "172.17.0.1:7667:7667"
+ dns:
+ - 9.9.9.9
+ restart: unless-stopped
+ command: ["--conf", "/matterircd.toml"]
+ volumes:
+ - ./matterircd.toml:/matterircd.toml:ro
+networks:
+ bitlbeenet:
+ signalnet:
+ matterircdnet:
+volumes:
+ userdata:
+ matterircddb:
+----
+
+timestamp:1699398469
+
+version:0.1.0
+
+https://blog.terminaldweller.com/rss/feed
+
+https://raw.githubusercontent.com/terminaldweller/blog/main/mds/oneclientforeverything.md