Diffstat (limited to '')
-rw-r--r--  mds/DNS.txt                    |  44
-rw-r--r--  mds/NTP.txt                    |  14
-rw-r--r--  mds/cstruct2luatable.txt       | 134
-rw-r--r--  mds/disposablefirefox.md       | 222
-rw-r--r--  mds/howtogetyourSMSonIRC.txt   |   4
-rw-r--r--  mds/lazymakefiles.txt          | 102
-rw-r--r--  mds/oneclientforeverything.txt |   4
-rw-r--r--  mds/securedocker.txt           |  14
8 files changed, 293 insertions, 245 deletions
diff --git a/mds/DNS.txt b/mds/DNS.txt index d2bc173..c461ce2 100644 --- a/mds/DNS.txt +++ b/mds/DNS.txt @@ -53,12 +53,12 @@ men-in-the-middle are not allowed to snoop on what we are querying. server that is getting our DNS queries shall not be able to identify the source of the query. -There is more than one way to ``identify'' the source of the query. We +There is more than one way to "`identify`" the source of the query. We only mean the source as in the IP address that made the DNS query. This second requirement is what ODoH is trying to solve. ODoH tries to separate the identity of the source of the DNS query from the query -itself. ODoH stands for oblivous DoH. It add an ``oblivious'' proxy in +itself. ODoH stands for oblivous DoH. It add an "`oblivious`" proxy in middle of the source of the DNS query and the server. This way the proxy can send the queries in bulk for example to try to mask who sent what when. I’m summarizing here but what ODoH is trying to do can be @@ -81,14 +81,14 @@ Below you can see https://datatracker.ietf.org/doc/rfc9230/[ripped straight from RFC 9230] The main problem with this sort of a solution is that there is always an -element of ``trust-me-bruh'' to the whole situation. +element of "`trust-me-bruh`" to the whole situation. * How can we trust that the proxy provider and the server are not colluding? We could run our own oblivious proxy but then if it’s just you and your friends using the proxy, then your proxy is not obfuscating much, is it -now? And then there is the ``oblivious'' aspect of the solution. How can +now? And then there is the "`oblivious`" aspect of the solution. How can we enforce that? How can you verify that? .... @@ -106,8 +106,8 @@ monopolize access to DNS queries, I would propose ODoH. It has enough mumbo jumbo tech jargon(end-to-end-encrypted, …) to throw off your average layman and lul them into a false sense of security and privacy but it doesnt prevent the proxy and server provider from -colluding. 
After all the technnical jargon, you end up with ``it’s -safe'' and ``it’s private'' because ``you can trust us''. +colluding. After all the technnical jargon, you end up with "`it’s +safe`" and "`it’s private`" because "`you can trust us`". Now we can see that DoH, DoT and ODoH are all better than baseline DNS queries over UDP without encryption but they can’t satisfy both of our @@ -118,17 +118,17 @@ requirements. Now let’s talk about the solution I at the time of writing this blog post. -DoH or DoT is good enough to satisfy `Requirement001` but they need -something a little extra to be able to satisfy `Requirement002`. +DoH or DoT is good enough to satisfy `+Requirement001+` but they need +something a little extra to be able to satisfy `+Requirement002+`. For that, we use an anonymizing network like tor. DoT and DoH both work over TCP so we can use any SOCKS5 proxy here that ends up being a Tor proxy. What I mean is you can use a the Tor running on your host or you -can use `ssh -L` to use Tor running on a VPS. That way, your internet +can use `+ssh -L+` to use Tor running on a VPS. That way, your internet proviedr can’t know you’re using Tor at all. With your DNS queries going -over Tor, we can satisfy `Requirement002`. Tor is not the only solution -here but I use Tor. There is more than one anonimyzing network out there -and there are protocols that do this also. +over Tor, we can satisfy `+Requirement002+`. Tor is not the only +solution here but I use Tor. There is more than one anonimyzing network +out there and there are protocols that do this also. Right now we have an outline in our head: @@ -142,26 +142,26 @@ SOCKS5 and not SOCKS4 but that’s another can of worms) There is more than one way to do this but I have decided to use https://github.com/DNSCrypt/dnscrypt-proxy[dnscrypt-proxy]. We will not be using dnscrypt for the dnscrypt protocol though you could elect to -use that as the underlying DNS protocol. 
`dnscrypt-proxy` lets’s us use -a SOCKS5 proxy through which the DNS queries will be sent. We will use a -Tor SOCKS5 proxy here. You can choose which protocols should be enabled -and which ones should be disabled. There are two points: +use that as the underlying DNS protocol. `+dnscrypt-proxy+` lets’s us +use a SOCKS5 proxy through which the DNS queries will be sent. We will +use a Tor SOCKS5 proxy here. You can choose which protocols should be +enabled and which ones should be disabled. There are two points: * one, enable the tcp only option, since we dont want to use plain jane UDP queries. -* two, I have asked `dnscrypt-proxy` to only use DNS servers that +* two, I have asked `+dnscrypt-proxy+` to only use DNS servers that support DNSSEC. I recommend going through all the available options in the -`dnscrypt-proxy.toml` file. It is one of those config files with +`+dnscrypt-proxy.toml+` file. It is one of those config files with comments so it’s pretty sweet. There are quite a few useful options in there that you might care about depending on your needs. ==== Implementation -Right now I run `dnscrypt-proxy` on a small alpine linux VM. I made it +Right now I run `+dnscrypt-proxy+` on a small alpine linux VM. I made it fancier by running the VM on a tmpfs storage pool. Basically mine is -running entirely on RAM. I used to have `dnscrypt-proxy` running on a +running entirely on RAM. I used to have `+dnscrypt-proxy+` running on a raspberry pi and had my openwrt router forward DNS queries to that raspberry pi. There is obviously no best solution here. Just pick one that works for you. Here you can find the vagrantfile I use for the DNS @@ -227,11 +227,11 @@ end ---- It’s pretty straightforward. We use an alpine linux VM as base. Make a -new interface on the VM with a static IP and have `dnscrypt-proxy` +new interface on the VM with a static IP and have `+dnscrypt-proxy+` receive DNS queries through that interface and IP only. 
I don’t change the port number(53) because of certain applications(you know who you are) refusing to accept port for a DNS server’s address. You could also -make it spicier by using `privoxy`. Maybe I make a post about that +make it spicier by using `+privoxy+`. Maybe I make a post about that later. timestamp:1708814484 diff --git a/mds/NTP.txt b/mds/NTP.txt index ebb7997..de4557e 100644 --- a/mds/NTP.txt +++ b/mds/NTP.txt @@ -4,7 +4,7 @@ Well for this one I will be talking a bit about NTP and NTS. Unlike the DNS post there isn’t much going on here. NTP is plain-text, NTS uses TLS so if our requests are tampered with, we -can know. There is the ``oooh, you cant see what I’m sending now'' but +can know. There is the "`oooh, you cant see what I’m sending now`" but in this case its NTP so the content being secret is not necessarily more important than making sure the content has not been modified(guarantee of integrity). @@ -179,7 +179,7 @@ SOCKS5 but that’s a trivial matter. What is not trivial, however, is how NTS and NTP work, and by that I mean you will still have to ask a server to tell you the time. Doing so over Tor or other anonymizing networks should be fine but we can choose to try out another method of doing -things. Enter `sdwdate` +things. Enter `+sdwdate+` ==== sdwdate @@ -196,7 +196,7 @@ done, the larger user pool of NTS/NTP will offer more protection against the smaller userbase of sdwdate. sdwdate gives a table of comparison between itself and NTP. Let’s take at look at that: -Let’s take a look at `sdwdate`. It is a roller-coaster. And I do mean +Let’s take a look at `+sdwdate+`. It is a roller-coaster. And I do mean that. So don’t make up your mind until the very end. There is a comparison between NTP and sdwdate made https://www.kicksecure.com/wiki/Sdwdate#Sdwdate_vs_NTP[here] by @@ -246,7 +246,7 @@ directory? Second, what does that even mean? And third, who is writing these? 
The only kind of people who make this sort of mistake are people who use MS Windows more than Linux. This is official kicksecure documentation. You have Windows users writing these for the ultra secure -and hardened ``Linux'', I’ll say it again, ``Linux'', distro? +and hardened "`Linux`", I’ll say it again, "`Linux`", distro? * proxy support: again, NTS uses TCP so it supports SOCKS5 proxies as well but for whatever reason we are comparing against NTP(though whether we are comparing against the protocol or an implementation is something @@ -262,8 +262,8 @@ implementations and the protocols. In conclusion, why is that table even there? What purpose does it even serve? If we were going to base our judgement on the documentation provided on -kicksecure’s website, I am sorry to say that `sdwdate` does a very poor -job but fortunately that’s not all there is to it. +kicksecure’s website, I am sorry to say that `+sdwdate+` does a very +poor job but fortunately that’s not all there is to it. Now let’s go take a look at the github README for the project: @@ -287,7 +287,7 @@ proxy in which case the IP address will be that of the exit node or none at all. Now we know we definitely are dealing with a very promising solution. -`sdwdate' extracts the time stamp in the http header so we are not +'`sdwdate`' extracts the time stamp in the http header so we are not asking a known NTP server about the time, we are just doing a normal http request. diff --git a/mds/cstruct2luatable.txt b/mds/cstruct2luatable.txt index e95cc6b..8b4b7a1 100644 --- a/mds/cstruct2luatable.txt +++ b/mds/cstruct2luatable.txt @@ -5,9 +5,10 @@ For this tutorial we’ll change a C struct into a Lua table. The structure we’ll be using won’t be the simplest structure you’ll come across in the wild so hopefully the tutorial will do a little more than -just cover the basics. We’ll add the structures as `userdata` and not as -`lightuserdata`. 
Because of that, we won’t have to manage the memory -ourselves, instead we will let Lua’s GC handle it for us. Disclaimer: +just cover the basics. We’ll add the structures as `+userdata+` and not +as `+lightuserdata+`. Because of that, we won’t have to manage the +memory ourselves, instead we will let Lua’s GC handle it for us. +Disclaimer: * This turotial is not supposed to be a full dive into lua tables, metatables and their implementation or behavior. The tutorial is meant @@ -33,8 +34,8 @@ https://github.com/bloodstalker/blogstuff/tree/master/src/cstruct2luatbale[here] === C Structs First let’s take a look at the C structures we’ll be using. The primary -structure is called `a_t` which has, inside it, two more structures -`b_t` and `c_t`: +structure is called `+a_t+` which has, inside it, two more structures +`+b_t+` and `+c_t+`: [source,c] ---- @@ -67,23 +68,23 @@ The structures are purely artificial. === First Step: Lua Types -First let’s take a look at `a_t` and decide how we want to do this. -`a_t` has five members: +First let’s take a look at `+a_t+` and decide how we want to do this. +`+a_t+` has five members: -* `a_int` which in Lua we can turn into an `integer`. -* `a_float` which we can turn into a `number`. -* `a_string` which will be a Lua `string`. -* `a_p` which is a pointer to another structure. As previously stated, -we will turn this into a `userdata`. -* `a_pp` which is a double pointer. We will turn this into a table of -`userdata`. +* `+a_int+` which in Lua we can turn into an `+integer+`. +* `+a_float+` which we can turn into a `+number+`. +* `+a_string+` which will be a Lua `+string+`. +* `+a_p+` which is a pointer to another structure. As previously stated, +we will turn this into a `+userdata+`. +* `+a_pp+` which is a double pointer. We will turn this into a table of +`+userdata+`. === Second Step: Helper Functions Now let’s think about what we need to do. First we need to think about how we will be using our structures. 
For this example we will go with a pointer, i.e., our library code will get a pointer to the structure so -we need to turn the table into `userdata`. Next, we want to be able to +we need to turn the table into `+userdata+`. Next, we want to be able to push and pop our new table from the Lua stack. We can also use Lua’s type check to make sure our library code complains when someone passes a bad type. We will also add functions for pushing the structure arguments @@ -107,8 +108,8 @@ static a_t* pop_a_t(lua_State* ls, int index) { We check to see if the stack index we are getting is actually a userdata type and then check the type of the userdata we get to make sure we get the right userdata type. We check the type of the userdata by checking -its metatable. We will get into that later. This amounts to our ``pop'' -functionality for our new type. Now let’s write a ``push'': The function +its metatable. We will get into that later. This amounts to our "`pop`" +functionality for our new type. Now let’s write a "`push`": The function will look like this: [source,c] @@ -127,25 +128,26 @@ a_t* push_a_t(lua_State* ls) { } ---- -Notice that we reserve new memory here using `lua_newuserdata` instead -of `malloc` or what have you. This way we leave it up to Lua to handle +Notice that we reserve new memory here using `+lua_newuserdata+` instead +of `+malloc+` or what have you. This way we leave it up to Lua to handle the GC(in the real world however, you might not have the luxury of doing so). Now let’s talk about what we are actually doing here: First off we -reserve memory for our new table using `lua_newuserdata`. Then we get +reserve memory for our new table using `+lua_newuserdata+`. Then we get and set the metatable that we will register later in the tutorial with Lua for our newly constructed userdata. Setting the metatable is our way of telling Lua what our userdata is, what methods it has along with some customizations that we will talk about later. 
We need to have a method of retrieving our full userdata when we need it. We do that by -registering our userdata inside `LUA_REGISTRYINDEX`. We will need a key. -for simplicity’s sake we use the pointer that `lua_newuserdata` returned -as the key for each new full userdata. As for the value of the key, we -will use the full userdata itself. That’s why we are using -`lua_pushvalue`. Please note that lua doesn’t have a `push_fulluserdata` -function and we can’t just pass the pointer to our userdata as the key -since that would just be a lihgtuserdata and not a userdata so we just -copy the fulluserdata onto the stack as the value for the key. Lastly we -just set our key-value pair with `LUA_REGISTRYINDEX`. +registering our userdata inside `+LUA_REGISTRYINDEX+`. We will need a +key. for simplicity’s sake we use the pointer that `+lua_newuserdata+` +returned as the key for each new full userdata. As for the value of the +key, we will use the full userdata itself. That’s why we are using +`+lua_pushvalue+`. Please note that lua doesn’t have a +`+push_fulluserdata+` function and we can’t just pass the pointer to our +userdata as the key since that would just be a lihgtuserdata and not a +userdata so we just copy the fulluserdata onto the stack as the value +for the key. Lastly we just set our key-value pair with +`+LUA_REGISTRYINDEX+`. Next we will write a function that pushes the fields of the structure onto the stack: @@ -194,10 +196,10 @@ int new_a_t(lua_State* ls) { } ---- -We just push an `a_t` on top of stack and then populate the fields with -the values already on top of stack. The fact that we wrote tha two +We just push an `+a_t+` on top of stack and then populate the fields +with the values already on top of stack. The fact that we wrote tha two separate functions for pushing the arguments and returning a new table -instance means we can use `new_a_t` as a constructor from lua as well. +instance means we can use `+new_a_t+` as a constructor from lua as well. 
We’ll later talk about that. === Third Step: Setters and Getters @@ -244,8 +246,8 @@ static int getter_a_p(lua_State *ls) { } ---- -For the sake of laziness, let’s assume `a_t->a_int` denotes the number -of entries in `a_t->a_pp`. +For the sake of laziness, let’s assume `+a_t->a_int+` denotes the number +of entries in `+a_t->a_pp+`. [source,c] ---- @@ -272,11 +274,11 @@ static int getter_a_pp(lua_State* ls) { } ---- -Since we register all our tables with `LUA_REGISTRYINDEX` we just +Since we register all our tables with `+LUA_REGISTRYINDEX+` we just retreive the key which in our case, conviniently is the pointer to the userdata and retrieve the value(our userdata). As you can see, for setters we are assuming that the table itself is being passed as the -first argument(the `pop_a_t` line assumes that). +first argument(the `+pop_a_t+` line assumes that). Our setters methods would be called like this in Lua: @@ -286,12 +288,12 @@ local a = a_t() a:set_a_int(my_int) ---- -The `:` operator in Lua is syntactic sugar. The second line from the -above snippet is equivalent to `a.set_a_int(self, my_int)`. As you can +The `+:+` operator in Lua is syntactic sugar. The second line from the +above snippet is equivalent to `+a.set_a_int(self, my_int)+`. As you can see, the table itself will always be our first argument. That’s why our assumption above will always be true if the lua code is well-formed. -We do the same steps above for `b_t` and `c_t` getter functions. +We do the same steps above for `+b_t+` and `+c_t+` getter functions. Now let’s look at our setters: @@ -361,8 +363,8 @@ static const luaL_Reg a_t_meta[] = {{0, 0}}; We just list the functions we want to be accessible inside Lua code. Lua expects the C functions that we register with Lua to have the form -`(int)(func_ptr*)(lua_State*)`. Also, it’s a good idea to take a look at -the metatable events that Lua 5.3 supports +`+(int)(func_ptr*)(lua_State*)+`. 
Also, it’s a good idea to take a look +at the metatable events that Lua 5.3 supports http://lua-users.org/wiki/MetatableEvents[here]. They provide customization options for our new table type(as an example we get the same functionality as C++ where we get to define what an operator does @@ -394,14 +396,15 @@ Please note that we are registering the metatable as a global. It is generally not recommended to do so.Why you ask? Adding a new enrty to the global table in Lua means you are already reserving that keyword, so if another library also needs that key, you are going to have lots of -fun(the term `fun` here is borrowed from the Dwarf Fortress literature). -Entries in the global table will require Lua to look things up in the -global table so it slows things down a bit, though whether the slow-down -is signifacant enough really depends on you and your requirements. +fun(the term `+fun+` here is borrowed from the Dwarf Fortress +literature). Entries in the global table will require Lua to look things +up in the global table so it slows things down a bit, though whether the +slow-down is signifacant enough really depends on you and your +requirements. We are almost done with our new table but there is one thing remaining and that is our table doesn’t have a cozy constructor(Cozy constructors -are not a thing. Seriously. I just made it up.). We can use our `new` +are not a thing. Seriously. I just made it up.). We can use our `+new+` function as a constructor, we have registered it with our metatable, but it requires you to pass all the arguments at the time of construction. Sometimes it’s convinient to hold off on passing all or some of the args @@ -414,17 +417,18 @@ something called metatable events. Eeach event has a string key and the value is whatever you put as the value. The values are used whenever that event happens. 
Some the events are: -* `__call` -* `__pairs` -* `__sub` -* `__add` -* `__gc` The `__sub` event is triggered when your table is the operand -of a suntraction operator. `__gc` is used when lua want to dispose of -the table so if you are handling the memory yourself, in contrast to -letting Lua handle it for you, here’s where you free memory. The events -are a powerful tool that help us customize how our new table behaves. - -For a constructor, we will use the `__call` event. That means when +* `+__call+` +* `+__pairs+` +* `+__sub+` +* `+__add+` +* `+__gc+` The `+__sub+` event is triggered when your table is the +operand of a suntraction operator. `+__gc+` is used when lua want to +dispose of the table so if you are handling the memory yourself, in +contrast to letting Lua handle it for you, here’s where you free memory. +The events are a powerful tool that help us customize how our new table +behaves. + +For a constructor, we will use the `+__call+` event. That means when someone calls our metatable in Lua, like this(call event is triggered when our table is called, syntactically speaking): @@ -433,10 +437,10 @@ when our table is called, syntactically speaking): local a = a_t() ---- -`a` will become a new instance of our table. We can add a value for our -metatable’s `__call` key from either Lua or C. Since we are talking -about Lua and haven’t almost written anything in Lua, let’s do it in -Lua: +`+a+` will become a new instance of our table. We can add a value for +our metatable’s `+__call+` key from either Lua or C. Since we are +talking about Lua and haven’t almost written anything in Lua, let’s do +it in Lua: [source,lua] ---- @@ -449,8 +453,8 @@ setmetatable(a_t, {__call = ) ---- -We use our `new` method which we previously registered for our -metatable. Note that Lua will pass `nil` for the argument if we don’t +We use our `+new+` method which we previously registered for our +metatable. 
Note that Lua will pass `+nil+` for the argument if we don’t provide any. That’s how our cozy constructor works. === Final Words @@ -470,10 +474,10 @@ with the possibility of me having to do the same for a lot more C structs. I just couldn’t bring myself to do it manually for that many C structs so I decided to work on a code generator that does that for me. The result is https://github.com/bloodstalker/luatablegen[luatablegen]. -`luatablegen` is a simple script that takes the description of your C +`+luatablegen+` is a simple script that takes the description of your C structures in an XML file and generates the C code for your new tables and metatables. It does everything we did by hand automatically for us. -`lautablegen` is in its early stages, so again, any feedback or help +`+lautablegen+` is in its early stages, so again, any feedback or help will be appreciated. timestamp:1705630055 diff --git a/mds/disposablefirefox.md b/mds/disposablefirefox.md index 98633fe..9834ab2 100644 --- a/mds/disposablefirefox.md +++ b/mds/disposablefirefox.md @@ -1,56 +1,56 @@ # Making a Disposable Firefox Instance We want to make a disposable firefox instance.<br/> -Why firefox? well the only other choice is chromium really. Mozilla are no choir boys either. Basically we are choosing between the lesser of two evils here. Firefox it is then.<br/> -Qutebrowser and netsurf are solid but for this one, I want something that has more compatability.<br/> +Why firefox? well the only other choice is chromium really. Mozilla are no choir boys either. Basically we are choosing between the lesser of two evils here. There is also the who gogole killing off manifest v2.<br/> +Qutebrowser and netsurf are solid but for this one, we will choose something that has more compatibility.<br/> Now let's talk about the requirements and goals for this lil undertaking of ours: ## Requirements and Goals We want: -- the instance to be ephemeral. This will prevent any persistant threat to remain on the VM. 
+- the instance to be ephemeral. This will prevent any persistent threat to remain on the VM. - the instance to be isolated from the host. - to prevent our IP address from being revealed to the websites we visit. We will not be: - doing any fingerprint-resisting. In case someone wants to do it, here's a good place to start: [arkenfox's user.js](https://github.com/arkenfox/user.js/) -- we are trying to keep our IP from being revealed to the websites we visit. We don't care whether a VPN provider can be subpoenaed or not. Otherwise, needless to say, use your own VPN server but will limit the IP choices. trade-offs people, trade-offs. +- we are trying to keep our IP from being revealed to the websites we visit. We don't care whether a VPN provider can be subpoenaed or not. Otherwise, needless to say, use your own VPN server but that will limit the IP choices you have. Trade-offs people, trade-offs. There is also the better choice, imho, which is use a SOCKS5 proxy. ## Implementation ### Isolation and Sandboxing -We will be practicing compertmentalization. This makes it harder for threats to spread. There are more than one way to do this in the current Linux landscape. We will be using a virtual machine and not a container. Needless to say, defense in depth is a good practice so in case your threat model calls for it, one could run firefox in a container inside the VM but for our purposes running inside a virtual machine is enough.<br/> -To streamline the process, we will be using vagrant to provision the VM. like already mentioned, we will use Vagrant's plugin for libvirt to build/manage the VM which in turn will use qemu/kvm as the hypervisor.<br/> -We value transparency so we will use an open-source stack for the virtualization: Vagrant+libvirt+qemu/kvm<br/> +We will be practicing compartmentalization. This makes it harder for threats to spread. There are more than one way to do this in the current Linux landscape. 
We will be using a virtual machine and not a container. Needless to say, defense in depth is a good practice so in case your threat model calls for it, one could run firefox in a container inside the VM but for our purposes running inside a virtual machine is enough.<br/> +To streamline the process, we will be using vagrant to provision the VM. Like already mentioned, we will use Vagrant's plugin for libvirt to build/manage the VM which in turn will use qemu/kvm as the hypervisor.<br/> +We value transparency so we will use an open-source stack for the virtualisation: Vagrant+libvirt+qemu/kvm<br/> The benefits of using an open-source backend include: -- we don't have to worry about any backdoors in the software +- we don't have to worry about any backdoors in the software. There is a big difference between "they **probably** don't put backdoors into their software" and "there are no backdoors on this piece of software"(the xz incident non-withstanding) - we don't have to deal with very late and lackluster responses to security vulnerabilities -Yes. we just took shots at two specific hypervisors. If you know, you know.<br/> +Yes. We just took shots at two specific hypervisors. If you know, you know.<br/> Now lets move on to the base for the VM.<br/> -We need something small for two reasons: a smaller attack surface and a smaller memory footprint(yes. a smaller memory-footrpint. we will talk about this a bit later).<br/> +We need something small for two reasons: a smaller attack surface and a smaller memory footprint(yes. A smaller memory-footprint. We will talk about this a bit later).<br/> So the choice is simple if we are thinking of picking a linux distro. We use an alpine linux base image. We could pick an openbsd base. 
That has the added benefit of the host and the guest not running the same OS which makes it harder for the threats to break isolation but for the current iteration we will be using alpine linux.<br/> ### IP Address Leak prevention The choice here is rather simple:<br/> We either decide to use a VPN or a SOCKS5 proxy. You could make your own VPN and or SOCKS5 proxy. This IS the more secure option but will limit the ip choices we have. If your threat model calls for it, then by all means, take that route. For my purposes using a VPN provider is enough. We will be using mullvad vpn. Specifically, we will be using the openvpn config that mullvad generates for us. We will not be using the mullvad vpn app mostly because a VPN app is creepy.<br/> -We will also be implementing a kill-switch for the VPN. in case the VPN fails at any point, we don't want to leak our IP address. A kill-switch makes sure nothing is sent out when the VPN fails. -We will use ufw to implement the kill-switch feature.<br/> +We will also be implementing a kill-switch for the VPN. In case the VPN fails at any point, we don't want to leak our IP address. A kill-switch makes sure nothing is sent out when the VPN fails. +We will use ufw to implement the kill-switch feature. This is similar to what [tails OS does](https://tails.net/contribute/design/#index18h3) as in, it tries to route everything through tor but it also blocks any non-tor traffic, thus ensuring there are no leaks. We will be doing the same.<br/> ### Non-Persistance -We are running inside a VM so in order to achieve non-persistance we could just make a new VM instance, run that and after we are done with the instance, we can just destroy it. We will be doing just that but we will be using a `tmpfs` filesystem and put our VM's disk on that. 
This has a couple of benefits: +We are running inside a VM so in order to achieve non-persistence we could just make a new VM instance, run that and after we are done with the instance, we can just destroy it. We will be doing just that but we will be using a `tmpfs` filesystem and put our VM's disk on that. This has a couple of benefits: - RAM is faster than disk. Even faster than an nvme drive - RAM is volatile -One thing to be wary of is swap. In our case we will be using the newser `tmpfs` which will use swap if we go over our disk limit so keep this in mind while making the tmpfs mount. Please note that there are ways around this as well. One could use the older `ramfs` but in my case this is not necessary since I'm using zram for my host's swap solution. This means that the swap space will be in the RAM itself so hitting the swap will still mean we never hit the disk.<br/> +One thing to be wary of is swap. In our case we will be using the newer `tmpfs` which will use swap if we go over our disk limit so keep this in mind while making the tmpfs mount. Please note that there are ways around this as well. One could use the older `ramfs` but in my case this is not necessary since I'm using zram for my host's swap solution. This means that the swap space will be on the RAM itself so hitting the swap will still mean we never hit the disk.<br/> To mount a tmpfs, we can run: @@ -58,6 +58,7 @@ To mount a tmpfs, we can run: sudo mount -t tmpfs -o size=4096M tmpfs /tmp/tmpfs ``` +Remember we talked about a smaller memory footprint? This is why. An alpine VM with firefox on top of it is smaller both in disk-size and memory used(mostly because of alpine using libmusl instead of glibc).<br/> The above command will mount a 4GB tmpfs on `/tmp/tmpfs`.<br/> Next we want to create a new storage pool for libvirt so that we can specify the VM to use that in Vagrant. 
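The storage-pool step just mentioned can be sketched roughly as below. This is a sketch, not part of the patch: it assumes libvirt's `virsh` tooling, uses the `tmpfs_pool` name that appears in the Vagrantfile hunk of this diff, and assumes the tmpfs is already mounted at `/tmp/tmpfs` as shown above. The pool name must match whatever `libvirt.storage_pool_name` is set to.

```shell
# Write a libvirt "dir" storage pool definition that points at the tmpfs mount.
cat > /tmp/tmpfs_pool.xml <<'EOF'
<pool type='dir'>
  <name>tmpfs_pool</name>
  <target>
    <path>/tmp/tmpfs</path>
  </target>
</pool>
EOF

# On a host with libvirtd running, the pool would then be registered with:
#   virsh pool-define /tmp/tmpfs_pool.xml
#   virsh pool-start tmpfs_pool
#   virsh pool-autostart tmpfs_pool
```

Since the pool lives on tmpfs, it has to be re-created after a reboot, which is exactly the non-persistence we are after.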
@@ -84,45 +85,58 @@ ufw default deny outgoing ufw allow in on tun0 ufw allow out on tun0 # enable libvirt bridge -ufw allow in on eth0 from 192.168.121.1 -ufw allow out on eth0 to 192.168.121.1 +ufw allow in on eth0 from 192.168.121.1 proto tcp +ufw allow out on eth0 to 192.168.121.1 proto tcp # server block -ufw allow out on eth0 to 185.204.1.174 port 443 -ufw allow in on eth0 from 185.204.1.174 port 443 -ufw allow out on eth0 to 185.204.1.176 port 443 -ufw allow in on eth0 from 185.204.1.176 port 443 -ufw allow out on eth0 to 185.204.1.172 port 443 -ufw allow in on eth0 from 185.204.1.172 port 443 -ufw allow out on eth0 to 185.204.1.171 port 443 -ufw allow in on eth0 from 185.204.1.171 port 443 -ufw allow out on eth0 to 185.212.149.201 port 443 -ufw allow in on eth0 from 185.212.149.201 port 443 -ufw allow out on eth0 to 185.204.1.173 port 443 -ufw allow in on eth0 from 185.204.1.173 port 443 -ufw allow out on eth0 to 193.138.7.237 port 443 -ufw allow in on eth0 from 193.138.7.237 port 443 -ufw allow out on eth0 to 193.138.7.217 port 443 -ufw allow in on eth0 from 193.138.7.217 port 443 -ufw allow out on eth0 to 185.204.1.175 port 443 -ufw allow in on eth0 from 185.204.1.175 port 443 +ufw allow out on eth0 to 185.204.1.174 port 443 proto tcp +ufw allow in on eth0 from 185.204.1.174 port 443 proto tcp +ufw allow out on eth0 to 185.204.1.176 port 443 proto tcp +ufw allow in on eth0 from 185.204.1.176 port 443 proto tcp +ufw allow out on eth0 to 185.204.1.172 port 443 proto tcp +ufw allow in on eth0 from 185.204.1.172 port 443 proto tcp +ufw allow out on eth0 to 185.204.1.171 port 443 proto tcp +ufw allow in on eth0 from 185.204.1.171 port 443 proto tcp +ufw allow out on eth0 to 185.212.149.201 port 443 proto tcp +ufw allow in on eth0 from 185.212.149.201 port 443 proto tcp +ufw allow out on eth0 to 185.204.1.173 port 443 proto tcp +ufw allow in on eth0 from 185.204.1.173 port 443 proto tcp +ufw allow out on eth0 to 193.138.7.237 port 443 proto tcp +ufw allow in on 
eth0 from 193.138.7.237 port 443 proto tcp
+ufw allow out on eth0 to 193.138.7.217 port 443 proto tcp
+ufw allow in on eth0 from 193.138.7.217 port 443 proto tcp
+ufw allow out on eth0 to 185.204.1.175 port 443 proto tcp
+ufw allow in on eth0 from 185.204.1.175 port 443 proto tcp
 echo y | ufw enable
 ```
 
-First off we forcefully reset ufw. This makes sure we ware starting from a known state.<br/>
-Second, we disable all incoming and outgoing traffic. This makes sure our default policy for unforseen scenarios is to deny traffic leaving the VM.<br/>
+First, we forcefully reset ufw. This makes sure we are starting from a known state.<br/>
+Second, we deny all incoming and outgoing traffic. This makes sure our default policy for any unforeseen scenario is to deny traffic leaving the VM.<br/>
 Then we allow traffic through the VPN interface, tun0.<br/>
-Finally, in my case and beacuse Vagrant, we allow traffic to and from the libvirt bridge, which in my case in 192.168.121.1.<br/>
+Finally, in my case and because of libvirt, we allow traffic to and from the libvirt bridge, which in my case is 192.168.121.1.<br/>
 
 Then we add two rules for each VPN server. One for incoming and one for outgoing traffic:
 
 ```sh
-ufw allow out on eth0 to 185.204.1.174 port 443
-ufw allow in on eth0 from 185.204.1.174 port 443
+ufw allow out on eth0 to 185.204.1.174 port 443 proto tcp
+ufw allow in on eth0 from 185.204.1.174 port 443 proto tcp
 ```
 
 `eth0` is the interface that originally had internet access. Now after denying it any access, we are allowing it to only talk to the VPN server on the server's port 443.<br/>
-Please keep in mind that the addresses, the port and even the protocol(tcp/udp) will depend on the VPN server.<br/>
+Needless to say, the IP addresses, the ports and the protocol(tcp/udp, which we now also have ufw enforce via `proto tcp`) will depend on the VPN server and your provider.<br/>
+Note: make sure you are not doing DNS requests out-of-band with regard to your VPN.
This seems to be a common mistake: some VPN providers don't send DNS requests through the VPN tunnel by default, which means your actual traffic goes through the tunnel but you are kindly letting your ISP(if you have not changed your host's DNS servers) know where you are sending your traffic.<br/>
-after setting the rules we enable ufw.<br/>
+After setting the rules, we enable ufw.<br/>
+
+### Sudo-less NFS
+
+In order to make the process more streamlined and not mistakenly keep an instance alive, we need a sudo-less NFS mount for the VM.<br/>
+Without sudo-less NFS, we would have to type in the sudo password twice, once when the VM is being brought up and once when it is being destroyed. Imagine a scenario where you close the disposable firefox VM, thinking that it is gone, but in reality it needs you to type in the sudo password to destroy it, thus keeping the instance alive.<br/>
+The solution is simple. We add the following to `/etc/exports`:
+
+```sh
+"/home/user/share/nfs" 192.168.121.0/24(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
+```
+
+This will enable the VM to access `/home/user/share/nfs` without needing sudo.<br/>
 
 ## The Vagrantfile
 
@@ -148,15 +162,19 @@ Vagrant.configure('2') do |config|
   config.ssh.connect_timeout = 15
 
   config.vm.provider 'libvirt' do |libvirt|
-    libvirt.storage_pool_name = 'tmpfs_pool'
+    # name of the storage pool, mine is ramdisk.
+    libvirt.storage_pool_name = 'ramdisk'
     libvirt.default_prefix = 'disposable-'
     libvirt.driver = 'kvm'
+    # amount of memory to allocate to the VM
     libvirt.memory = '3076'
+    # number of logical CPU cores to allocate to the VM
     libvirt.cpus = 6
     libvirt.sound_type = nil
     libvirt.qemuargs value: '-nographic'
     libvirt.qemuargs value: '-nodefaults'
     libvirt.qemuargs value: '-no-user-config'
+    # enabling a serial console just in case
     libvirt.qemuargs value: '-serial'
     libvirt.qemuargs value: 'pty'
     libvirt.qemuargs value: '-sandbox'
@@ -168,11 +186,7 @@ Vagrant.configure('2') do |config|
     set -ex
     sudo apk update && \
     sudo apk upgrade
-    sudo apk add tor torsocks firefox-esr xauth font-dejavu wget openvpn unzip iptables bubblewrap apparmor ufw nfs-utils haveged tzdata
-    wget -q https://addons.mozilla.org/firefox/downloads/file/4228676/foxyproxy_standard-8.9.xpi
-    mv foxyproxy_standard-8.9.xpi foxyproxy@eric.h.jung.xpi
-    mkdir -p ~/.mozilla/extensions/{ec8030f7-c20a-464f-9b0e-13a3a9e97384}/
-    mv foxyproxy@eric.h.jung.xpi ~/.mozilla/extensions/{ec8030f7-c20a-464f-9b0e-13a3a9e97384}/
+    sudo apk add firefox-esr xauth font-dejavu wget openvpn unzip iptables ufw nfs-utils haveged tzdata
     mkdir -p /vagrant && \
     sudo mount -t nfs 192.168.121.1:/home/devi/share/nfs /vagrant
     SHELL
@@ -183,11 +197,7 @@ Vagrant.configure('2') do |config|
     sed -i 's/^X11Forwarding .*/X11Forwarding yes/' /etc/ssh/sshd_config
     rc-service sshd restart
 
-    #rc-update add tor default
-    cp /vagrant/torrc /etc/tor/torrc
-    rc-service tor start
-
-    ln -s /usr/share/zoneinfo/UTC /etc/localtime
+    ln -fs /usr/share/zoneinfo/UTC /etc/localtime
 
     #rc-update add openvpn default
     mkdir -p /tmp/mullvad/ && \
@@ -204,8 +214,6 @@ Vagrant.configure('2') do |config|
     sysctl -p /etc/sysctl.d/ipv4.conf
     rc-service openvpn start || true
     sleep 1
-
-    cp /vagrant/bw_firefox /usr/bin/
     SHELL
 
   config.vm.provision 'kill-switch', communicator_required: false, type: 'shell', name: 'kill-switch', privileged: true, inline: <<-SHELL
@@ -216,28 +224,28 @@ 
Vagrant.configure('2') do |config| ufw default deny outgoing ufw allow in on tun0 ufw allow out on tun0 - # enable libvirt bridge - ufw allow in on eth0 from 192.168.121.1 - ufw allow out on eth0 to 192.168.121.1 + # allow local traffic through the libvirt bridge + ufw allow in on eth0 from 192.168.121.1 proto tcp + ufw allow out on eth0 to 192.168.121.1 proto tcp # server block - ufw allow out on eth0 to 185.204.1.174 port 443 - ufw allow in on eth0 from 185.204.1.174 port 443 - ufw allow out on eth0 to 185.204.1.176 port 443 - ufw allow in on eth0 from 185.204.1.176 port 443 - ufw allow out on eth0 to 185.204.1.172 port 443 - ufw allow in on eth0 from 185.204.1.172 port 443 - ufw allow out on eth0 to 185.204.1.171 port 443 - ufw allow in on eth0 from 185.204.1.171 port 443 - ufw allow out on eth0 to 185.212.149.201 port 443 - ufw allow in on eth0 from 185.212.149.201 port 443 - ufw allow out on eth0 to 185.204.1.173 port 443 - ufw allow in on eth0 from 185.204.1.173 port 443 - ufw allow out on eth0 to 193.138.7.237 port 443 - ufw allow in on eth0 from 193.138.7.237 port 443 - ufw allow out on eth0 to 193.138.7.217 port 443 - ufw allow in on eth0 from 193.138.7.217 port 443 - ufw allow out on eth0 to 185.204.1.175 port 443 - ufw allow in on eth0 from 185.204.1.175 port 443 + ufw allow out on eth0 to 185.204.1.174 port 443 proto tcp + ufw allow in on eth0 from 185.204.1.174 port 443 proto tcp + ufw allow out on eth0 to 185.204.1.176 port 443 proto tcp + ufw allow in on eth0 from 185.204.1.176 port 443 proto tcp + ufw allow out on eth0 to 185.204.1.172 port 443 proto tcp + ufw allow in on eth0 from 185.204.1.172 port 443 proto tcp + ufw allow out on eth0 to 185.204.1.171 port 443 proto tcp + ufw allow in on eth0 from 185.204.1.171 port 443 proto tcp + ufw allow out on eth0 to 185.212.149.201 port 443 proto tcp + ufw allow in on eth0 from 185.212.149.201 port 443 proto tcp + ufw allow out on eth0 to 185.204.1.173 port 443 proto tcp + ufw allow in on eth0 from 
185.204.1.173 port 443 proto tcp
+      ufw allow out on eth0 to 193.138.7.237 port 443 proto tcp
+      ufw allow in on eth0 from 193.138.7.237 port 443 proto tcp
+      ufw allow out on eth0 to 193.138.7.217 port 443 proto tcp
+      ufw allow in on eth0 from 193.138.7.217 port 443 proto tcp
+      ufw allow out on eth0 to 185.204.1.175 port 443 proto tcp
+      ufw allow in on eth0 from 185.204.1.175 port 443 proto tcp
       echo y | ufw enable
     SHELL
 
@@ -249,34 +257,44 @@ Vagrant.configure('2') do |config|
   end
 end
 ```
 
-First let's talk about how we interface with our firefox instance. ssh or spice?<br/>
-I have gone with ssh. In our case we use ssh's X11 forwarding feature. This will allow us to keep the size of the VM small.
+### Provisioning
+
+We will be using the vagrant shell provisioner to prepare the VM.<br/>
+The first provisioner, named `update-upgrade`, does what the name implies. It also installs the required packages.<br/>
+The next provisioner, `update-upgrade-privileged`, enables X11 forwarding on openssh, sets up openvpn as a service and starts it, and finally sets the timezone to UTC.<br/>
+The third provisioner, `kill-switch`, sets up our kill-switch using ufw.<br/>
+The final provisioner runs the mullvad test for their VPN. Since at this point we have set up the kill-switch, we won't leak our IP address to the mullvad website, but that's not important since we are using our own IP address to connect to the mullvad VPN servers anyway.<br/>
+
+### Interface
+
+How do we interface with our firefox instance: ssh or spice?<br/>
+I have gone with ssh. In our case we use ssh's X11 forwarding feature. This choice is made purely out of convenience. You can go with spice.<br/>
 
 ### Timezone
 
-We set the VM's timezone to UTC. That's the most generic one.
+We set the VM's timezone to UTC because it's generic.<br/>
 
 ### haveged
 
 haveged is a daemon that provides a source of randomness for our VM. Look [here](https://www.kicksecure.com/wiki/Dev/Entropy#haveged).
-### VM Isolation
-
 #### QEMU Sandbox
 
-#### CPU Pinning
-
-CPU pinning alone is not what we want. We want cpu pinning and then further isolating those cpu cores on the host so that only the VM runs on those cores. This will give us a better performance on the VM side but also provide better security and isolation since this will mitigate side-channel attacks based on the CPU(the spectre/metldown family, the gift that keeps on giving. thanks intel!).<br/>
-
-### passwordless NFS
+From `man 1 qemu`:
 
 ```txt
-"/home/devi/share/nfs" 192.168.121.0/24(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000) 172.17.0.0/16(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000) 10.167.131.0/24(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
+-sandbox arg[,obsolete=string][,elevateprivileges=string][,spawn=string][,resourcecontrol=string]
+    Enable Seccomp mode 2 system call filter. 'on' will enable syscall filtering and 'off' will disable it. The default is 'off'.
 ```
 
+#### CPU Pinning
+
+CPU pinning alone is not what we want. We want cpu pinning and then further isolating those cpu cores on the host so that only the VM runs on those cores. This will give us better performance on the VM side but also provide better security and isolation since this will mitigate side-channel attacks based on the CPU(the spectre/meltdown family, the gift that keeps on giving).<br/>
+In my case, I've done what I can on the host-side to mitigate spectre/meltdown but I don't have enough resources to pin 6 logical cores to this VM. If you can spare the resources, by all means, please do.<br/>
+
 ### No Passthrough
 
-We could do a GPU passthrough to use hardware acceleration and be able to view 4k videos with this instance but I did not make this with such applications in mind so we won't be doing that.
+We will not be doing any passthroughs.
It is not necessarily a choice made because of security, but merely out of a lack of need for the performance benefit that hardware-acceleration brings.<br/> ## Launcher Script @@ -284,17 +302,31 @@ We could do a GPU passthrough to use hardware acceleration and be able to view 4 #!/usr/bin/dash set -x +sigint_handler() { + local ipv4="$1" + xhost -"${ipv4}" + vagrant destroy -f +} + +trap sigint_handler INT +trap sigint_handler TERM + working_directory="/home/devi/devi/vagrantboxes.git/main/disposable/" cd ${working_directory} || exit 1 vagrant up disposable_id=$(vagrant global-status | grep disposable | awk '{print $1}') disposable_ipv4=$(vagrant ssh "${disposable_id}" -c "ip a show eth0 | grep inet | grep -v inet6 | awk '{print \$2}' | cut -d/ -f1 | tr -d '[:space:]'") + +trap 'sigint_handler ${disposable_ipv4}' INT +trap 'sigint_handler ${disposable_ipv4}' TERM + echo "got IPv4 ${disposable_ipv4}" xhost +"${disposable_ipv4}" ssh \ -o StrictHostKeyChecking=no \ -o Compression=no \ + -o UserKnownHostsFile=/dev/null \ -X \ -i".vagrant/machines/default/libvirt/private_key" \ vagrant@"${disposable_ipv4}" \ @@ -303,12 +335,24 @@ xhost -"${disposable_ipv4}" vagrant destroy -f ``` +The script is straightforward. It brings up the VM, and destroys it when the disposable firefox instance is closed.<br/> +Let's look at a couple of things that we are doing here:<br/> + +- The shebang line: we are using `dash`, the debian almquist shell. It has a smaller attack surface. It's small but we don't need all the features of bash or zsh here so we use something "more secure". + +- we add and remove the IP of the VM from the xhost list. This allows the instance to display the firefox window on the host's X server and after it's done, we remove it so we don't end up whitelisting the entire IP range(least privilege principle, remember?). +- we use `-o UserKnownHostsFile=/dev/null` to prevent the VM from adding to the host's known hosts file. There are two reasons why we do this here. 
One, the IP range is limited: we will eventually end up conflicting with an IP in your known-hosts file that belonged to an alive-and-well VM at some point but is now dead, so libvirt will reassign its IP address to our disposable instance, which will prompt ssh to warn you that something suspicious is going on, which will prevent the ssh command from completing successfully, which will in turn result in the VM getting killed. Two, we stop polluting the known-hosts file with the IPs of all the disposable VM instances that we keep creating, so you won't have to deal with the same problem while running other VMs.
+- we register a signal handler for `SIGTERM` and `SIGINT` so that we can destroy the VM after we have created it and one of those signals is received. This gives us higher confidence that the VM gets destroyed, but it is not a guarantee: a `SIGKILL` will kill the script and that's that.
+
 ## Notes Regarding the Host
 
-A good deal of security and isolation comes from the host specially in a scenario when you are running a VM on top of the host. This is an entire topic so we won't be getting into it but [here](https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings) is a good place to start. Just because it's only a single line at he end of some random blogpost doesn't mean its not important. Take this seriously.<br/>
+A good deal of security and isolation comes from the host, especially in a scenario where you are running a VM on top of the host. This is an entirely different topic so we won't be getting into it but [here](https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings) is a good place to start. Just because it's only a single line at the end of some random blogpost doesn't mean it's not important. Take this seriously.<br/>
+
+We are using somebody else's vagrant base image.
Supply-chain attacks are a thing, so it is much better to use our own base image.<br/>
+As a starting point, you can look [here](https://github.com/lavabit/robox/tree/master/scripts/alpine319). This is how the base image we are using is created.<br/>
 
 <p>
-  <div class="timestamp">timestamp:1718588927</div>
+  <div class="timestamp">timestamp:1719428898</div>
   <div class="version">version:1.0.0</div>
   <div class="rsslink">https://blog.terminaldweller.com/rss/feed</div>
   <div class="originalurl">https://raw.githubusercontent.com/terminaldweller/blog/main/mds/disposablefirefox.md</div>
diff --git a/mds/howtogetyourSMSonIRC.txt b/mds/howtogetyourSMSonIRC.txt
index 438e7b0..e449940 100644
--- a/mds/howtogetyourSMSonIRC.txt
+++ b/mds/howtogetyourSMSonIRC.txt
@@ -1,6 +1,6 @@
 == How to get your SMS on IRC
 
-It’s not really a continuation of the ``one client for everything'' post
+It’s not really a continuation of the "`one client for everything`" post
 but it is in the same vein. Basically, in this post we are going to make
 it so that we receive our SMS messages on IRC. More specifically, it
 will send it to a IRC channel. In my case this works and is actually
@@ -103,7 +103,7 @@ https://github.com/terminaldweller/sms-webhook[here].
 
 Here’s a brief explanation of what the code does: We launch the irc bot
 in a goroutine. The web hook server will only respond to POST requests
-on `/sms` after a successful basic http authentication. In our case
+on `+/sms+` after a successful basic http authentication. In our case
 there is no reason not to use a randomized username as well. So
 effectively we will have two secrets this way. You can create a new user
 in the pocketbase admin panel. Pocketbase comes with a default
diff --git a/mds/lazymakefiles.txt b/mds/lazymakefiles.txt
index 09e9960..6df821e 100644
--- a/mds/lazymakefiles.txt
+++ b/mds/lazymakefiles.txt
@@ -28,20 +28,20 @@ You are expected to have the makefile open while reading this.
* I will be explaining some of the more, let’s say, esoteric behaviours
 of make which can get the beginners confused.
 * gnu make variables are considered macros by C/C++ standards. I will
-use the term ``variable'' since it’s what the gnu make documents use.
+use the term "`variable`" since it’s what the gnu make documents use.
 * The makefiles are not supposed to be hands-off. I change bits here and
 there from project to project.
-* The makefile recognizes the following extensions: `.c` and `.cpp`. If
-you use different extensions, change the makefile accordingly.
+* The makefile recognizes the following extensions: `+.c+` and `+.cpp+`.
+If you use different extensions, change the makefile accordingly.
 
 === The Macros
 
-`TARGET` holds the target name. It uses the `?=` assignment operator so
-you can pass it a different value from a script, just in case. There are
-a bunch of varibales that you can assign on the terminal to replace the
-makefile’s defaults. Among those there are some that are first getting a
-default value assigned and then get the `?=` assignemnt operator so you
-can assign them values from the terminal, e.g:
+`+TARGET+` holds the target name. It uses the `+?=+` assignment operator
+so you can pass it a different value from a script, just in case. There
+are a bunch of variables that you can assign on the terminal to replace
+the makefile’s defaults. Among those there are some that are first
+getting a default value assigned and then get the `+?=+` assignment
+operator so you can assign them values from the terminal, e.g:
 
[source,make]
----
@@ -51,46 +51,46 @@ CC?=clang
 
It looks a bit backwards but there is a reason for that.
 
The reason why we need to do that is because those variables are called
-`implicit variables` in gnu make terminology. Implicit variables are
+`+implicit variables+` in gnu make terminology. Implicit variables are
already defined by your makefile even if you havent defined them so they
get some special treatment.
In order to assign them values from the -terminal, we first assign them a value and then use the `?=` operator on -them. We don’t really need to assign the default value here again, but I -felt like it would be more expressive to assign the default for a second -time. - -Variables `CC_FLAGS`, `CXX_FLAGS` and `LD_FLAGS` have accompanying -variables, namely `CC_FLAGS_EXTRA`, `CXX_FLAGS_EXTRA` and -`LD_FLAGS_EXTRA`. The extra ones use the `?=` assignment. The scheme is -to have the first set to host the invariant options and use the second -set, to change the options that would need changing between different -builds, if need be. - -The variable `BUILD_MODE` is used for the sanitizer builds of clang. -`ADDSAN` will build the code with the address sanitizer. `MEMSAN` will -build the code with memory sanitizer and `UBSAN` will build the code -with undefined behaviour sanitizers. The build mode will affect all the -other targets, meaning you will get a dynamically-linked executable in -debug mode with address sanitizers if you assign `MEMSAN` to -`BUILD_MODE`. +terminal, we first assign them a value and then use the `+?=+` operator +on them. We don’t really need to assign the default value here again, +but I felt like it would be more expressive to assign the default for a +second time. + +Variables `+CC_FLAGS+`, `+CXX_FLAGS+` and `+LD_FLAGS+` have accompanying +variables, namely `+CC_FLAGS_EXTRA+`, `+CXX_FLAGS_EXTRA+` and +`+LD_FLAGS_EXTRA+`. The extra ones use the `+?=+` assignment. The scheme +is to have the first set to host the invariant options and use the +second set, to change the options that would need changing between +different builds, if need be. + +The variable `+BUILD_MODE+` is used for the sanitizer builds of clang. +`+ADDSAN+` will build the code with the address sanitizer. `+MEMSAN+` +will build the code with memory sanitizer and `+UBSAN+` will build the +code with undefined behaviour sanitizers. 
The build mode will affect all
+the other targets, meaning you will get a dynamically-linked executable
+in debug mode with address sanitizers if you assign `+MEMSAN+` to
+`+BUILD_MODE+`.
 
 === Targets
 
 ==== default
 
-The default target is `all`. `all` depends on `TARGET`.
+The default target is `+all+`. `+all+` depends on `+TARGET+`.
 
 ==== all
 
-`all` is an aggregate target. calling it will build, or rather, try to
+`+all+` is an aggregate target. Calling it will build, or rather, try to
 build everything(given your source-code’s sitation, some targets might
 not make any sense).
 
 ==== depend
 
-`depend` depends on `.depend` which is a file generated by the makefile
-that holds the header dependencies. This is how we are making the
-makefile sensitive to header changes. The file’s contents look like
+`+depend+` depends on `+.depend+` which is a file generated by the
+makefile that holds the header dependencies. This is how we are making
+the makefile sensitive to header changes. The file’s contents look like
 this:
 
[source,make]
@@ -99,18 +99,18 @@ main.c:main.h
 myfile1.c:myfile1.h myfile2.h
----
 
-The inclusion directive is prefixed with a `-`. That’s make lingo for
-ignore-if-error. My shell prompt has a `make -q` part in it so just
-`cd`ing into a folder will generate the `.depend` file for me.Lazy and
-Convinient.
+The inclusion directive is prefixed with a `+-+`. That’s make lingo for
+ignore-if-error. My shell prompt has a `+make -q+` part in it so just
+`+cd+`ing into a folder will generate the `+.depend+` file for me. Lazy
+and convenient.
 
 ==== Objects
 
 For the objects, there are three sets. You have the normal garden
-variety objects that end in `.o`. You get the debug enabled objects that
-end in `.odbg` and you get the instrumented objectes that are to be used
-for coverage that end in `.ocov`. I made the choice of having three
-distinct sets of objects since I personally sometimes struggle to
+variety objects that end in `+.o+`.
You get the debug enabled objects
+that end in `+.odbg+` and you get the instrumented objects that are to
+be used for coverage that end in `+.ocov+`. I made the choice of having
+three distinct sets of objects since I personally sometimes struggle to
 remember whether the current objects are normal, debug or coverage. This
 way, I don’t need to. That’s the makefile’s problem now.
 
@@ -132,10 +132,10 @@ The instrumented-for-coverage executable, dynaimclly-linked.
 
 ==== cov
 
-The target generates the coverage report. it depend on `runcov` which
-itself, in turn, depends on `$(TARGET)-cov` so if you change `runcov` to
-how your executable should run, cov will handle rebuilding the objects
-and then running and generating the coverage report.
+The target generates the coverage report. It depends on `+runcov+` which
+itself, in turn, depends on `+$(TARGET)-cov+` so if you change
+`+runcov+` to how your executable should run, cov will handle rebuilding
+the objects and then running and generating the coverage report.
 
 ==== covrep
 
@@ -156,14 +156,14 @@ Will try to build your target as an archive, i.e. static library.
 
 ==== TAGS
 
-Depends on the `tags` target, generates a tags file. The tags file
+Depends on the `+tags+` target, generates a tags file. The tags file
 includes tags from the header files included by your source as well.
 
 ==== valgrind
 
-Depends on `$(TARGET)` by default, runs valgrind with
-`--leak-check=yes`. You probably need to change this for the makefile to
-run your executable correctly.
+Depends on `+$(TARGET)+` by default, runs valgrind with
+`+--leak-check=yes+`. You probably need to change this for the makefile
+to run your executable correctly.
 
 ==== format
 
@@ -177,7 +177,7 @@ Builds the target using emscripten and generates a javascript file.
 
 ==== clean and deepclean
 
-`clean` cleans almost everything. `deepclean` depends on `clean`.
+`+clean+` cleans almost everything. `+deepclean+` depends on `+clean+`.
basically a two level scheme so you can have two different sets of clean
commands.
 
diff --git a/mds/oneclientforeverything.txt b/mds/oneclientforeverything.txt
index dff38b7..587c230 100644
--- a/mds/oneclientforeverything.txt
+++ b/mds/oneclientforeverything.txt
@@ -28,7 +28,7 @@ the interface you’re getting is not really unified.
 
 ==== The web app way
 
-An example that comes to mind for this sort of solution is `rambox`
+An example that comes to mind for this sort of solution is `+rambox+`
 though they are no longer offering a FOSS solution. I’m just mentioning
 them as an example of what’s being offered out there as a ready-to-use
 solution.
@@ -264,7 +264,7 @@ need a bouncer if you need to have your messages when your client
 disconnects. ergo has that functionality built-in. Here are some other
 perks:
 
-* ergo allow you to define a ``private'' IRC network. You do that by
+* ergo allows you to define a "`private`" IRC network. You do that by
 requiring SASL while connecting, so others can’t connect to your
 instance without having an account
 * it is under active development
diff --git a/mds/securedocker.txt b/mds/securedocker.txt
index 62a4796..17bfbbc 100644
--- a/mds/securedocker.txt
+++ b/mds/securedocker.txt
@@ -94,7 +94,7 @@ application directly control the syscalls that it makes. Gofer handles
 filesystem access(not /proc) for the application. The application is a
 regular application. gVisor aims to provide an environment equivalent to
 Linux 4.4. gvisor presently does not implement every system call,
-`/proc` file or `/sys` file. Every sandbox environment gets its own
+`+/proc+` file or `+/sys+` file. Every sandbox environment gets its own
 instance of Sentry. Every container in the sandbox gets its own instance
 of Gofer. gVisor currently does not support all system calls. You can
 find the list of supported system calls for amd64
@@ -227,8 +227,8 @@ int main(int argc, char **argv) {
 }
----
 
-Building is straightforward.
Just remember to link against `libseccomp` -with `-lseccomp`. +Building is straightforward. Just remember to link against +`+libseccomp+` with `+-lseccomp+`. [source,bash] ---- @@ -265,7 +265,7 @@ bwrap --seccomp 9 9<${TEMP_LOG} bash ---- Then we can go and see where the logs end up. On my host, they are -logged under `/var/log/audit/audit.log` and they look like this: +logged under `+/var/log/audit/audit.log+` and they look like this: .... type=SECCOMP msg=audit(1716144132.339:4036728): auid=1000 uid=1000 gid=1000 ses=1 subj=unconfined pid=19633 comm="bash" exe="/usr/bin/bash" sig=0 arch=c000003e syscall=13 compat=0 ip=0x7fa58591298f code=0x7ffc0000AUID="devi" UID="devi" GID="devi" ARCH=x86_64 SYSCALL=rt_sigaction @@ -308,7 +308,7 @@ containers from the host system. As an example let’s look at the script provided below. Here we are creating a new network namespace. The new interface is provided by simply connecting an android phone for USB tethering. Depending on the -situation you have going on and the `udev` naming rules the interface +situation you have going on and the `+udev+` naming rules the interface name will differ but the concept is the same. We are creating a new network namespace for a second internet provider, which in this case, is our android phone. We then use this network namespace to execute @@ -352,9 +352,9 @@ NetworkManager or whatever you have. === SBOM and Provenance Attestation -What is SBOM? NIST defines SBOM as a ``formal record containing the +What is SBOM? NIST defines SBOM as a "`formal record containing the details and supply chain relationships of various components used in -building software.''. It contains details about the components used to +building software.`". It contains details about the components used to create a certain piece of software. SBOM is meant to help mitigate the threat of supply chain attacks(remember xz?). |
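A closing aside on the kill-switch hunks at the top of this patch: the server block is eighteen near-identical ufw rule pairs. As a sketch (my addition, not part of the diffed posts), the pairs could be generated from a list of the Mullvad server IPs used above instead of being hand-written:

```shell
#!/bin/sh
# Sketch: print the per-server ufw rule pairs from a list of VPN server
# IPs (the addresses used in the kill-switch hunks above). We only print
# the commands so the output can be inspected before applying it.
servers="185.204.1.174 185.204.1.176 185.204.1.172 185.204.1.171 \
185.212.149.201 185.204.1.173 193.138.7.237 193.138.7.217 185.204.1.175"

for ip in $servers; do
  echo "ufw allow out on eth0 to $ip port 443 proto tcp"
  echo "ufw allow in on eth0 from $ip port 443 proto tcp"
done
```

Piping the output into `sh` (or dropping the `echo`s) would apply the rules; printing them first keeps the sketch inspectable.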