
magni

magni takes a URL, upscales all the images on that page, and returns a simple HTML page with the upscaled images embedded.
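
The rough flow is: fetch the page, collect the image sources, upscale each image, and emit a bare HTML page that embeds the results. Below is a minimal sketch of the fetch-and-embed halves, assuming requests and BeautifulSoup purely for illustration; magni's actual parsing code may differ.

# Illustrative only, not magni's actual code.
import requests
from bs4 import BeautifulSoup

def collect_image_urls(page_url):
    """Fetch the page and return the src of every <img> tag."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [img["src"] for img in soup.find_all("img") if img.get("src")]

def build_result_page(image_files):
    """Build a bare HTML page that embeds the upscaled images in order."""
    tags = "\n".join('<img src="{}">'.format(name) for name in image_files)
    return "<!DOCTYPE html><html><body>" + tags + "</body></html>"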

usage: magni.py [-h] [--url URL] [--method METHOD] [--port PORT]

options:
  -h, --help            show this help message and exit
  --url URL, -u URL     the url to the page containing the images
  --method METHOD, -m METHOD
                        the method to use. either fsrcnn or espcn
  --port PORT, -p PORT  the port to serve the images over
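
The --method flag picks the super-resolution network: FSRCNN and ESPCN are both available as pretrained models for OpenCV's dnn_superres module, which is one common way to run them. A minimal sketch of such an upscaling step follows; whether magni uses OpenCV, the model filenames, and the x2 scale factor are all assumptions here, not magni's documented behaviour.

# Illustrative sketch; requires opencv-contrib-python.
import cv2

def upscale(image_path, method="fsrcnn", scale=2):
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    # e.g. ./models/FSRCNN_x2.pb or ./models/ESPCN_x2.pb (assumed filenames)
    sr.readModel("./models/{}_x{}.pb".format(method.upper(), scale))
    sr.setModel(method, scale)   # "fsrcnn" or "espcn"
    img = cv2.imread(image_path)
    return sr.upsample(img)      # upscaled image as a numpy array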

Install and Run

Install

poetry install

Run

poetry shell && HTTPS_PROXY=socks5h://127.0.0.1:9094 ./magni.py --url https://chapmanganato.com/manga-dt980702/chapter-184

You can use poetry run as well:

HTTPS_PROXY=socks5h://127.0.0.1:9094 poetry run ./magni.py --url https://chapmanganato.com/manga-dt980702/chapter-184

Docker

Build

docker build -t magni:latest --load .

Run

docker run -p 8086:8086 \
  -e HTTPS_PROXY=socks5h://192.168.1.100:9050 \
  -e MAGNI_MODEL_PATH=/opt/magni_models \
  -e MAGNI_IMAGE_PATH=/opt/magni_images \
  -v ./models:/opt/magni_models \
  magni:latest --url https://chapmanganato.com/manga-dt980702/chapter-184

Env Vars

magni recognizes the following environment variables:

HTTP_PROXY/HTTPS_PROXY

You can also specify a socks5 proxy here since magni uses pysocks to make the connections.
If the env vars are not defined or are empty, magni will not use any proxy.
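
A minimal sketch of how a socks5h proxy taken from the environment can be handed to the HTTP client, with pysocks doing the SOCKS handshake. Using requests here is an assumption for illustration; magni's real request code may differ.

import os
import requests

proxies = {}
if os.environ.get("HTTPS_PROXY"):
    proxies["https"] = os.environ["HTTPS_PROXY"]  # e.g. socks5h://127.0.0.1:9094
if os.environ.get("HTTP_PROXY"):
    proxies["http"] = os.environ["HTTP_PROXY"]

# If neither variable is set (or both are empty), proxies stays empty
# and the request goes out without a proxy.
resp = requests.get("https://example.com/image.jpg", proxies=proxies, timeout=30)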

MAGNI_MODEL_PATH

Path to the directory where magni will store the models.
If the env var is not defined or is empty, magni will use ./models as a default value.

MAGNI_IMAGE_PATH

Path to the directory where magni will store the upscaled images.
If the env var is not defined or is empty, magni will use ./images as a default value.

MAGNI_USER_AGENT

The user agent magni will use to download the images.
If the env var is not defined or is empty, magni will use the default user agent shown below:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
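
The fallback behaviour for the three MAGNI_* variables can be summarised with a small helper; the function name is hypothetical, but the defaults match the ones documented above.

import os

def env_or_default(name, default):
    """Return the env var if it is set and non-empty, otherwise the default."""
    value = os.environ.get(name, "")
    return value if value else default

MODEL_PATH = env_or_default("MAGNI_MODEL_PATH", "./models")
IMAGE_PATH = env_or_default("MAGNI_IMAGE_PATH", "./images")
USER_AGENT = env_or_default(
    "MAGNI_USER_AGENT",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
)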

TODO

  • currently the models we are using are not as effective as I'd like. I should either find ones that are specifically trained on greyscale images or just train some myself.