Flask -> FastAPI rewrite

Eriks Karls 2023-12-14 18:57:31 +02:00
parent 91e8f7bbf4
commit 3f322e2999
19 changed files with 664 additions and 207 deletions

Dockerfile

@@ -1,8 +1,30 @@
 FROM python:3-alpine as base
-WORKDIR /app
-COPY requirements.txt .
-RUN pip install --compile --no-cache-dir --requirement requirements.txt
-COPY . /app
-CMD ["gunicorn", "-c", "gunicorn.py"]
-#CMD python app.py
+WORKDIR /app
+
+# Code from https://github.com/nginxinc/docker-nginx/blob/4bf0763f4977fff7e9648add59e0540088f3ca9f/stable/alpine-slim/Dockerfile
+ENV NGINX_VERSION 1.25.3
+ENV PKG_RELEASE 1
+COPY compose/nginx/install.sh /
+RUN /bin/sh /install.sh \
+    && rm /install.sh
+
+# Python part
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt gunicorn
+COPY service service
+COPY compose/docker-entrypoint.sh /
+COPY compose/nginx/default.conf /etc/nginx/conf.d/
+COPY compose/nginx/docker-entrypoint.d /docker-entrypoint.d
+COPY assets /usr/share/nginx/html
+
+ENTRYPOINT ["/docker-entrypoint.sh"]
+EXPOSE 80
+STOPSIGNAL SIGTERM
+#CMD ["gunicorn", "-c", "/app/service/gunicorn.py"]
+CMD ["uvicorn", "service.app:app", "--host", "0.0.0.0", "--port", "5000", "--proxy-headers", "--no-server-header"]

Makefile

@@ -3,3 +3,19 @@ docker-build:
 docker-push: docker-build
 	docker push registry.72.lv/flask-namedays:latest
+
+clean:
+	black service
+	isort service
+	flake8 service
+	find . -name '*.pyc' -exec rm -f {} +
+	find . -name '*.pyo' -exec rm -f {} +
+	find . -name '*~' -exec rm -f {} +
+	find . -name '__pycache__' -exec rm -fr {} +
+	find . -name '.mypy_cache' -exec rm -fr {} +
+	find . -name '.pytest_cache' -exec rm -fr {} +
+	find . -name '.coverage' -exec rm -f {} +
+
+install-dev:
+	pip install -U pur black isort flake8 pip setuptools wheel
+	pip install -Ur requirements.txt

README.md (deleted)

@@ -1,91 +0,0 @@
# Flask nameday calendar generator
Select names to be included in (Latvian) nameday ics calendar
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://gitlab.com/keriks/flask-namedays.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/project/integrations/)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Automatically merge when pipeline succeeds](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://docs.gitlab.com/ee/user/clusters/agent/)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://gitlab.com/-/experiment/new_project_readme_content:9bcf98d8e733be2add2baaf0719bdede?https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.

app.py (deleted)

@@ -1,97 +0,0 @@
import datetime
import json
import uuid
from collections import defaultdict
from io import BytesIO
from typing import Iterable, Mapping

from flask import Flask, jsonify, render_template, request, send_file
from icalendar import Alarm, Calendar, Event
from unidecode import unidecode

app = Flask(__name__)

LV_MONTHS = {
    1: "jan",
    2: "feb",
    3: "mar",
    4: "apr",
    5: "mai",
    6: "jūn",
    7: "jūl",
    8: "aug",
    9: "sep",
    10: "okt",
    11: "nov",
    12: "dec",
}

# Source JSON created from http://vvc.gov.lv/export/sites/default/files/paplasinatais_saraksts.pdf


def generate_ical_for_mapping(cal: Mapping[datetime.date, Iterable[str]]) -> BytesIO:
    ical = Calendar()
    ical["VERSION"] = "2.0"
    ical["PRODID"] = "NameDays"
    for date, names in sorted(cal.items(), key=lambda x: x[0]):
        ev = Event()
        ev.add("SUMMARY", ", ".join(sorted(names)))
        ev.add("DTSTART", date)
        ev.add("DTEND", date + datetime.timedelta(days=1))
        ev.add("DTSTAMP", datetime.datetime(2000, 1, 1))
        ev.add("RRULE", {"FREQ": "YEARLY"})
        ev.add("CATEGORY", "Anniversary")
        ev.add("UID", uuid.uuid4())
        alert = Alarm()
        alert.add("action", "DISPLAY")
        alert.add("TRIGGER", datetime.timedelta(hours=9))
        alert.add("DESCRIPTION", "Default description")
        ev.add_component(alert)
        ical.add_component(ev)
    return BytesIO(ical.to_ical(True))


@app.route("/", methods=["POST", "GET"])
def calendar():
    if request.method == "POST":
        with open("vardadienas.json") as f:
            vdienas = json.load(f)
        cal = defaultdict(list)
        for selected_name in request.form.getlist("words"):
            month, day, name = selected_name.split("__")
            vdmd = vdienas[str(int(month))][str(int(day))]
            if name in vdmd["normal"] or name in vdmd["special"]:
                date = datetime.date(2000, int(month), int(day))
                cal[date].append(name)
        if cal:
            name = f"{uuid.uuid4().hex}.ics"
            f = generate_ical_for_mapping(cal)
            return send_file(f, mimetype="text/calendar", as_attachment=True, download_name=name)
    return render_template("namedays.html")


@app.route("/search/")
def calendar_search():
    term = request.args.get("term")
    results = []
    if term:
        term = unidecode(term.lower(), errors="preserve")
        with open("mapping.json") as f:
            mapping = json.load(f)
        for kind in ["normal", "special"]:
            words = {
                "text": kind.title(),
                "children": [
                    {
                        "id": key,
                        "text": f"{value} ({key.split('__')[1]}. {LV_MONTHS[int(key.split('__')[0])]}.)",
                    }
                    for key, value in mapping[kind].items()
                    if unidecode(value.lower(), errors="preserve").startswith(term)
                ],
            }
            if words["children"]:
                results.append(words)
    return jsonify({"results": results, "pagination": {"more": False}})


if __name__ == "__main__":
    app.run("0.0.0.0", 8000, True, False)
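For the deleted `/search/` view above: matching is a prefix test after lowercasing and ASCII-folding with `unidecode`. A rough stdlib approximation of that folding, using `unicodedata` NFD stripping instead of `unidecode` (function names here are mine; `unidecode` transliterates many characters that NFD stripping does not):

```python
import unicodedata


def ascii_fold(text: str) -> str:
    """Approximate unidecode: NFD-decompose, then drop combining marks."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))


def starts_with_folded(value: str, term: str) -> bool:
    """Prefix match after lowercasing and folding, as the /search/ view does."""
    return ascii_fold(value.lower()).startswith(ascii_fold(term.lower()))


print(starts_with_folded("Jūlijs", "jul"))  # True: "jūlijs" folds to "julijs"
```

This is why a Select2 user can type an unaccented query like "jul" and still match accented Latvian names.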

namedays.html (template)

@@ -15,7 +15,7 @@
     <h1>Vārdadienu kalendāra ģenerators</h1>
 </div>
 <div class="col-12">
-    <form method="post">
+    <form method="post" action="/api/download">
         <div class="mb-3">
             <label for="idWordSelect" class="form-label">Atlasi vārdus:</label>
             <select name="words" class="form-control js-example-basic-multiple" id="idWordSelect" multiple></select>

@@ -30,7 +30,7 @@
     <script src="https://cdn.jsdelivr.net/npm/select2@4.1.0-rc.0/dist/js/select2.min.js"></script>
     <script>
         $(document).ready(function () {
-            $('.js-example-basic-multiple').select2({minimumInputLength: 2, ajax: {url: "/search/", dataType: "json", delay: 500}});
+            $('.js-example-basic-multiple').select2({minimumInputLength: 2, ajax: {url: "/api/search", dataType: "json", delay: 500}});
         });
     </script>
 </body>

compose/docker-entrypoint.sh (new executable file)

@@ -0,0 +1,47 @@
#!/bin/sh
# vim:sw=4:ts=4:et

set -e

entrypoint_log() {
    if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
        echo "$@"
    fi
}

if [ "$1" = "nginx" ] || [ "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        entrypoint_log "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"

        entrypoint_log "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -V | while read -r f; do
            case "$f" in
                *.envsh)
                    if [ -x "$f" ]; then
                        entrypoint_log "$0: Sourcing $f";
                        . "$f"
                    else
                        # warn on shell scripts without exec bit
                        entrypoint_log "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *.sh)
                    if [ -x "$f" ]; then
                        entrypoint_log "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        entrypoint_log "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) entrypoint_log "$0: Ignoring $f";;
            esac
        done

        entrypoint_log "$0: Configuration complete; ready for start up"
    else
        entrypoint_log "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi

nginx -g "daemon on;"

exec "$@"

compose/nginx/default.conf (new file)

@@ -0,0 +1,40 @@
server {
    listen       80;
    server_name  _;

    # display real ip in nginx logs when connected through reverse proxy via docker network
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    client_max_body_size 32k;

    location /api {
        proxy_pass http://localhost:5000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Requested-With $http_x_requested_with;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        client_body_buffer_size 128k;
        proxy_connect_timeout 60;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        proxy_buffers 32 8k;
        proxy_request_buffering off;
    }

    location / {
        root   /usr/share/nginx/html;
        index  index.html;
    }

    location = /favicon.ico {
        alias /usr/share/nginx/html/static/favicon.ico;
    }
}

compose/nginx/docker-entrypoint.d/ script: enable IPv6 listen (new executable file)

@@ -0,0 +1,67 @@
#!/bin/sh
# vim:sw=4:ts=4:et

set -e

entrypoint_log() {
    if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
        echo "$@"
    fi
}

ME=$(basename "$0")
DEFAULT_CONF_FILE="etc/nginx/conf.d/default.conf"

# check if we have ipv6 available
if [ ! -f "/proc/net/if_inet6" ]; then
    entrypoint_log "$ME: info: ipv6 not available"
    exit 0
fi

if [ ! -f "/$DEFAULT_CONF_FILE" ]; then
    entrypoint_log "$ME: info: /$DEFAULT_CONF_FILE is not a file or does not exist"
    exit 0
fi

# check if the file can be modified, e.g. not on a r/o filesystem
touch /$DEFAULT_CONF_FILE 2>/dev/null || { entrypoint_log "$ME: info: can not modify /$DEFAULT_CONF_FILE (read-only file system?)"; exit 0; }

# check if the file is already modified, e.g. on a container restart
grep -q "listen \[::]\:80;" /$DEFAULT_CONF_FILE && { entrypoint_log "$ME: info: IPv6 listen already enabled"; exit 0; }

if [ -f "/etc/os-release" ]; then
    . /etc/os-release
else
    entrypoint_log "$ME: info: can not guess the operating system"
    exit 0
fi

entrypoint_log "$ME: info: Getting the checksum of /$DEFAULT_CONF_FILE"

case "$ID" in
    "debian")
        CHECKSUM=$(dpkg-query --show --showformat='${Conffiles}\n' nginx | grep $DEFAULT_CONF_FILE | cut -d' ' -f 3)
        echo "$CHECKSUM  /$DEFAULT_CONF_FILE" | md5sum -c - >/dev/null 2>&1 || {
            entrypoint_log "$ME: info: /$DEFAULT_CONF_FILE differs from the packaged version"
            exit 0
        }
        ;;
    "alpine")
        CHECKSUM=$(apk manifest nginx 2>/dev/null | grep $DEFAULT_CONF_FILE | cut -d' ' -f 1 | cut -d ':' -f 2)
        echo "$CHECKSUM  /$DEFAULT_CONF_FILE" | sha1sum -c - >/dev/null 2>&1 || {
            entrypoint_log "$ME: info: /$DEFAULT_CONF_FILE differs from the packaged version"
            exit 0
        }
        ;;
    *)
        entrypoint_log "$ME: info: Unsupported distribution"
        exit 0
        ;;
esac

# enable ipv6 on default.conf listen sockets
sed -i -E 's,listen 80;,listen 80;\n    listen [::]:80;,' /$DEFAULT_CONF_FILE

entrypoint_log "$ME: info: Enabled listen on IPv6 in /$DEFAULT_CONF_FILE"

exit 0

compose/nginx/docker-entrypoint.d/ script: export local resolvers (new .envsh file, sourced by the entrypoint)

@@ -0,0 +1,12 @@
#!/bin/sh
# vim:sw=2:ts=2:sts=2:et

set -eu

LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[ "${NGINX_ENTRYPOINT_LOCAL_RESOLVERS:-}" ] || return 0

NGINX_LOCAL_RESOLVERS=$(awk 'BEGIN{ORS=" "} $1=="nameserver" {print $2}' /etc/resolv.conf)
export NGINX_LOCAL_RESOLVERS

compose/nginx/docker-entrypoint.d/ script: envsubst on templates (new executable file)

@@ -0,0 +1,78 @@
#!/bin/sh

set -e

ME=$(basename "$0")

entrypoint_log() {
    if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
        echo "$@"
    fi
}

add_stream_block() {
    local conffile="/etc/nginx/nginx.conf"

    if grep -q -E "\s*stream\s*\{" "$conffile"; then
        entrypoint_log "$ME: $conffile contains a stream block; include $stream_output_dir/*.conf to enable stream templates"
    else
        # check if the file can be modified, e.g. not on a r/o filesystem
        touch "$conffile" 2>/dev/null || { entrypoint_log "$ME: info: can not modify $conffile (read-only file system?)"; exit 0; }
        entrypoint_log "$ME: Appending stream block to $conffile to include $stream_output_dir/*.conf"
        cat << END >> "$conffile"
# added by "$ME" on "$(date)"
stream {
    include $stream_output_dir/*.conf;
}
END
    fi
}

auto_envsubst() {
    local template_dir="${NGINX_ENVSUBST_TEMPLATE_DIR:-/etc/nginx/templates}"
    local suffix="${NGINX_ENVSUBST_TEMPLATE_SUFFIX:-.template}"
    local output_dir="${NGINX_ENVSUBST_OUTPUT_DIR:-/etc/nginx/conf.d}"
    local stream_suffix="${NGINX_ENVSUBST_STREAM_TEMPLATE_SUFFIX:-.stream-template}"
    local stream_output_dir="${NGINX_ENVSUBST_STREAM_OUTPUT_DIR:-/etc/nginx/stream-conf.d}"
    local filter="${NGINX_ENVSUBST_FILTER:-}"

    local template defined_envs relative_path output_path subdir
    defined_envs=$(printf '${%s} ' $(awk "END { for (name in ENVIRON) { print ( name ~ /${filter}/ ) ? name : \"\" } }" < /dev/null ))
    [ -d "$template_dir" ] || return 0
    if [ ! -w "$output_dir" ]; then
        entrypoint_log "$ME: ERROR: $template_dir exists, but $output_dir is not writable"
        return 0
    fi
    find "$template_dir" -follow -type f -name "*$suffix" -print | while read -r template; do
        relative_path="${template#"$template_dir/"}"
        output_path="$output_dir/${relative_path%"$suffix"}"
        subdir=$(dirname "$relative_path")
        # create a subdirectory where the template file exists
        mkdir -p "$output_dir/$subdir"
        entrypoint_log "$ME: Running envsubst on $template to $output_path"
        envsubst "$defined_envs" < "$template" > "$output_path"
    done

    # Print the first file with the stream suffix, this will be false if there are none
    if test -n "$(find "$template_dir" -name "*$stream_suffix" -print -quit)"; then
        mkdir -p "$stream_output_dir"
        if [ ! -w "$stream_output_dir" ]; then
            entrypoint_log "$ME: ERROR: $template_dir exists, but $stream_output_dir is not writable"
            return 0
        fi
        add_stream_block
        find "$template_dir" -follow -type f -name "*$stream_suffix" -print | while read -r template; do
            relative_path="${template#"$template_dir/"}"
            output_path="$stream_output_dir/${relative_path%"$stream_suffix"}"
            subdir=$(dirname "$relative_path")
            # create a subdirectory where the template file exists
            mkdir -p "$stream_output_dir/$subdir"
            entrypoint_log "$ME: Running envsubst on $template to $output_path"
            envsubst "$defined_envs" < "$template" > "$output_path"
        done
    fi
}

auto_envsubst

exit 0
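The script above substitutes only environment variables that are actually defined (optionally narrowed by `NGINX_ENVSUBST_FILTER`), leaving other `${...}` placeholders untouched. A loose Python analogue of that substitution step, using `string.Template` (the variable names in the example are illustrative):

```python
from string import Template


def render_template_text(text: str, defined: dict[str, str]) -> str:
    """Substitute only the given variables; unknown ${...} placeholders survive,
    roughly like `envsubst "$defined_envs"` with a restricted variable list."""
    return Template(text).safe_substitute(defined)


conf = "proxy_pass http://${UPSTREAM_HOST}:${UPSTREAM_PORT};"
print(render_template_text(conf, {"UPSTREAM_HOST": "localhost"}))
# proxy_pass http://localhost:${UPSTREAM_PORT};
```

`safe_substitute` mirrors the useful property here: a template can reference variables the operator chose not to define without the render step failing.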

compose/nginx/docker-entrypoint.d/ script: tune worker_processes (new executable file)

@@ -0,0 +1,188 @@
#!/bin/sh
# vim:sw=2:ts=2:sts=2:et

set -eu

LC_ALL=C
ME=$(basename "$0")
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[ "${NGINX_ENTRYPOINT_WORKER_PROCESSES_AUTOTUNE:-}" ] || exit 0

touch /etc/nginx/nginx.conf 2>/dev/null || { echo >&2 "$ME: error: can not modify /etc/nginx/nginx.conf (read-only file system?)"; exit 0; }

ceildiv() {
  num=$1
  div=$2
  echo $(( (num + div - 1) / div ))
}

get_cpuset() {
  cpusetroot=$1
  cpusetfile=$2
  ncpu=0
  [ -f "$cpusetroot/$cpusetfile" ] || return 1
  for token in $( tr ',' ' ' < "$cpusetroot/$cpusetfile" ); do
    case "$token" in
      *-*)
        count=$( seq $(echo "$token" | tr '-' ' ') | wc -l )
        ncpu=$(( ncpu+count ))
        ;;
      *)
        ncpu=$(( ncpu+1 ))
        ;;
    esac
  done
  echo "$ncpu"
}

get_quota() {
  cpuroot=$1
  ncpu=0
  [ -f "$cpuroot/cpu.cfs_quota_us" ] || return 1
  [ -f "$cpuroot/cpu.cfs_period_us" ] || return 1
  cfs_quota=$( cat "$cpuroot/cpu.cfs_quota_us" )
  cfs_period=$( cat "$cpuroot/cpu.cfs_period_us" )
  [ "$cfs_quota" = "-1" ] && return 1
  [ "$cfs_period" = "0" ] && return 1
  ncpu=$( ceildiv "$cfs_quota" "$cfs_period" )
  [ "$ncpu" -gt 0 ] || return 1
  echo "$ncpu"
}

get_quota_v2() {
  cpuroot=$1
  ncpu=0
  [ -f "$cpuroot/cpu.max" ] || return 1
  cfs_quota=$( cut -d' ' -f 1 < "$cpuroot/cpu.max" )
  cfs_period=$( cut -d' ' -f 2 < "$cpuroot/cpu.max" )
  [ "$cfs_quota" = "max" ] && return 1
  [ "$cfs_period" = "0" ] && return 1
  ncpu=$( ceildiv "$cfs_quota" "$cfs_period" )
  [ "$ncpu" -gt 0 ] || return 1
  echo "$ncpu"
}

get_cgroup_v1_path() {
  needle=$1
  found=
  foundroot=
  mountpoint=

  [ -r "/proc/self/mountinfo" ] || return 1
  [ -r "/proc/self/cgroup" ] || return 1

  while IFS= read -r line; do
    case "$needle" in
      "cpuset")
        case "$line" in
          *cpuset*)
            found=$( echo "$line" | cut -d ' ' -f 4,5 )
            break
            ;;
        esac
        ;;
      "cpu")
        case "$line" in
          *cpuset*)
            ;;
          *cpu,cpuacct*|*cpuacct,cpu|*cpuacct*|*cpu*)
            found=$( echo "$line" | cut -d ' ' -f 4,5 )
            break
            ;;
        esac
    esac
  done << __EOF__
$( grep -F -- '- cgroup ' /proc/self/mountinfo )
__EOF__

  while IFS= read -r line; do
    controller=$( echo "$line" | cut -d: -f 2 )
    case "$needle" in
      "cpuset")
        case "$controller" in
          cpuset)
            mountpoint=$( echo "$line" | cut -d: -f 3 )
            break
            ;;
        esac
        ;;
      "cpu")
        case "$controller" in
          cpu,cpuacct|cpuacct,cpu|cpuacct|cpu)
            mountpoint=$( echo "$line" | cut -d: -f 3 )
            break
            ;;
        esac
        ;;
    esac
  done << __EOF__
$( grep -F -- 'cpu' /proc/self/cgroup )
__EOF__

  case "${found%% *}" in
    "/")
      foundroot="${found##* }$mountpoint"
      ;;
    "$mountpoint")
      foundroot="${found##* }"
      ;;
  esac
  echo "$foundroot"
}

get_cgroup_v2_path() {
  found=
  foundroot=
  mountpoint=

  [ -r "/proc/self/mountinfo" ] || return 1
  [ -r "/proc/self/cgroup" ] || return 1

  while IFS= read -r line; do
    found=$( echo "$line" | cut -d ' ' -f 4,5 )
  done << __EOF__
$( grep -F -- '- cgroup2 ' /proc/self/mountinfo )
__EOF__

  while IFS= read -r line; do
    mountpoint=$( echo "$line" | cut -d: -f 3 )
  done << __EOF__
$( grep -F -- '0::' /proc/self/cgroup )
__EOF__

  case "${found%% *}" in
    "")
      return 1
      ;;
    "/")
      foundroot="${found##* }$mountpoint"
      ;;
    "$mountpoint" | /../*)
      foundroot="${found##* }"
      ;;
  esac
  echo "$foundroot"
}

ncpu_online=$( getconf _NPROCESSORS_ONLN )
ncpu_cpuset=
ncpu_quota=
ncpu_cpuset_v2=
ncpu_quota_v2=

cpuset=$( get_cgroup_v1_path "cpuset" ) && ncpu_cpuset=$( get_cpuset "$cpuset" "cpuset.effective_cpus" ) || ncpu_cpuset=$ncpu_online
cpu=$( get_cgroup_v1_path "cpu" ) && ncpu_quota=$( get_quota "$cpu" ) || ncpu_quota=$ncpu_online
cgroup_v2=$( get_cgroup_v2_path ) && ncpu_cpuset_v2=$( get_cpuset "$cgroup_v2" "cpuset.cpus.effective" ) || ncpu_cpuset_v2=$ncpu_online
cgroup_v2=$( get_cgroup_v2_path ) && ncpu_quota_v2=$( get_quota_v2 "$cgroup_v2" ) || ncpu_quota_v2=$ncpu_online

ncpu=$( printf "%s\n%s\n%s\n%s\n%s\n" \
  "$ncpu_online" \
  "$ncpu_cpuset" \
  "$ncpu_quota" \
  "$ncpu_cpuset_v2" \
  "$ncpu_quota_v2" \
  | sort -n \
  | head -n 1 )

sed -i.bak -r 's/^(worker_processes)(.*)$/# Commented out by '"$ME"' on '"$(date)"'\n#\1\2\n\1 '"$ncpu"';/' /etc/nginx/nginx.conf
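The autotune script above writes the smallest of the online-CPU count and the cgroup CPU quotas into `worker_processes`, rounding quotas up. The core arithmetic restated in Python (function names are mine; the real script also considers cpuset limits and cgroup v2):

```python
def ceildiv(num: int, div: int) -> int:
    # same integer ceiling trick as the shell ceildiv()
    return (num + div - 1) // div


def effective_workers(ncpu_online: int, cfs_quota: int, cfs_period: int) -> int:
    """Smallest of online CPUs and the rounded-up cgroup v1 CPU quota."""
    candidates = [ncpu_online]
    if cfs_quota > 0 and cfs_period > 0:  # quota -1 or period 0 means "no limit"
        candidates.append(ceildiv(cfs_quota, cfs_period))
    return min(candidates)


print(effective_workers(8, 150000, 100000))  # 2: a 1.5-CPU quota rounds up to 2 workers
```

Rounding up means a container limited to, say, 0.5 CPUs still gets one worker rather than zero.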

compose/nginx/install.sh (new executable file)

@@ -0,0 +1,50 @@
#!/bin/sh

set -x

# create nginx user/group first, to be consistent throughout docker variants
addgroup -g 101 -S nginx
adduser -S -D -H -u 101 -h /var/cache/nginx -s /sbin/nologin -G nginx -g nginx nginx

nginxPackages="nginx=${NGINX_VERSION}-r${PKG_RELEASE}"

# install prerequisites for public key and pkg-oss checks
apk add --no-cache --virtual .checksum-deps openssl

KEY_SHA512="e09fa32f0a0eab2b879ccbbc4d0e4fb9751486eedda75e35fac65802cc9faa266425edf83e261137a2f4d16281ce2c1a5f4502930fe75154723da014214f0655"
wget -O /tmp/nginx_signing.rsa.pub https://nginx.org/keys/nginx_signing.rsa.pub
if echo "$KEY_SHA512 */tmp/nginx_signing.rsa.pub" | sha512sum -c -; then
    echo "key verification succeeded!"
    mv /tmp/nginx_signing.rsa.pub /etc/apk/keys/
else
    echo "key verification failed!"
    exit 1
fi

apk add -X "https://nginx.org/packages/mainline/alpine/v$(egrep -o '^[0-9]+\.[0-9]+' /etc/alpine-release)/main" --no-cache $nginxPackages

# remove checksum deps
apk del --no-network .checksum-deps

# if we have leftovers from building, let's purge them (including extra, unnecessary build deps)
if [ -n "$tempDir" ]; then rm -rf "$tempDir"; fi
if [ -f "/etc/apk/keys/abuild-key.rsa.pub" ]; then rm -f /etc/apk/keys/abuild-key.rsa.pub; fi
if [ -f "/etc/apk/keys/nginx_signing.rsa.pub" ]; then rm -f /etc/apk/keys/nginx_signing.rsa.pub; fi

# Bring in gettext so we can get `envsubst`, then throw
# the rest away. To do this, we need to install `gettext`
# then move `envsubst` out of the way so `gettext` can
# be deleted completely, then move `envsubst` back.
apk add --no-cache --virtual .gettext gettext
mv /usr/bin/envsubst /tmp/
runDeps="$( \
    scanelf --needed --nobanner /tmp/envsubst \
    | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
    | sort -u \
    | xargs -r apk info --installed \
    | sort -u \
)"
apk add --no-cache $runDeps
apk del --no-network .gettext
mv /tmp/envsubst /usr/local/bin/

# Bring in tzdata so users could set the timezones through the environment
# variables
apk add --no-cache tzdata

# forward request and error logs to docker log collector
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log

# create a docker-entrypoint.d directory
mkdir /docker-entrypoint.d

pyproject.toml

@@ -3,9 +3,8 @@ line-length = 120
 target-version = ['py311']
 include = '\.pyi?$'
 extend-exclude = '''(
-  migrations/*
-  | .git/*
-  | media/*
+  | .assets/*
+  | venv/*
 )'''

 workers = 4

@@ -13,5 +12,5 @@ workers = 4
 [tool.isort]
 profile = "black"
 line_length = 120
-skip = ["venv", "templates", ".git"]
+skip = ["venv", "assets", ".git"]
 multi_line_output = 3

requirements.txt

@@ -1,4 +1,7 @@
-Flask==3.0.0
+FastAPI==0.105.0
+pydantic==2.5.2
+uvicorn==0.24.0.post1
+python-multipart==0.0.6
 icalendar==5.0.11
 Unidecode==1.3.7
 Gunicorn==21.2.0

service/__init__.py (new empty file)

service/app.py (new file)

@@ -0,0 +1,123 @@
import datetime
import json
import uuid
from collections import defaultdict
from io import BytesIO
from typing import Annotated

from fastapi import FastAPI, Form, HTTPException, responses
from icalendar import Alarm, Calendar, Event
from pydantic import BaseModel
from unidecode import unidecode
from uvicorn.workers import UvicornWorker


def generate_ical_for_mapping(cal: dict[datetime.date, list[str]]) -> BytesIO:
    ical = Calendar()
    ical["VERSION"] = "2.0"
    ical["PRODID"] = "NameDays"
    for date, names in sorted(cal.items(), key=lambda x: x[0]):
        ev = Event()
        ev.add("SUMMARY", ", ".join(sorted(names)))
        ev.add("DTSTART", date)
        ev.add("DTEND", date + datetime.timedelta(days=1))
        ev.add("DTSTAMP", datetime.datetime(2000, 1, 1))
        ev.add("RRULE", {"FREQ": "YEARLY"})
        ev.add("CATEGORY", "Anniversary")
        ev.add("UID", uuid.uuid4())
        alert = Alarm()
        alert.add("action", "DISPLAY")
        alert.add("TRIGGER", datetime.timedelta(hours=9))
        alert.add("DESCRIPTION", "Default description")
        ev.add_component(alert)
        ical.add_component(ev)
    return BytesIO(ical.to_ical(True))


def starts_with(string_to_check: str, check_string: str) -> bool:
    value = unidecode(string_to_check.lower(), errors="preserve")
    query = unidecode(check_string.lower(), errors="preserve")
    return value.startswith(query)


with open("service/mapping.json") as f:
    MAPPING = json.load(f)

with open("service/vardadienas.json") as f:
    NAMEDAYS = json.load(f)

LV_MONTHS = {
    1: "jan",
    2: "feb",
    3: "mar",
    4: "apr",
    5: "mai",
    6: "jūn",
    7: "jūl",
    8: "aug",
    9: "sep",
    10: "okt",
    11: "nov",
    12: "dec",
}


class SearchResult(BaseModel):
    text: str
    id: str


class SearchResultSection(BaseModel):
    text: str
    children: list[SearchResult]


class SearchResponse(BaseModel):
    results: list[SearchResultSection]
    pagination: dict[str, bool] = {"more": False}


app = FastAPI()


@app.get("/", response_class=responses.HTMLResponse)
async def index_html():
    with open("assets/index.html") as f:
        return responses.HTMLResponse(f.read(), 201)


@app.get("/api/search")
async def search_words(term: str) -> SearchResponse:
    result_map = {}
    for section, names in MAPPING.items():
        result_map[section] = []
        for key, value in names.items():
            if starts_with(value, term):
                result_map[section].append(
                    SearchResult(id=key, text=f"{value} ({key.split('__')[1]}. {LV_MONTHS[int(key.split('__')[0])]}.)")
                )
    return SearchResponse(
        results=[
            SearchResultSection(text=section.title(), children=results)
            for section, results in result_map.items()
            if results
        ]
    )


@app.post("/api/download", response_class=responses.StreamingResponse)
async def download_ical(words: Annotated[list[str], Form()]):
    cal = defaultdict(list)
    for selected_name in words:
        month, day, name = selected_name.split("__")
        vdmd = NAMEDAYS[str(int(month))][str(int(day))]
        if name in vdmd["normal"] or name in vdmd["special"]:
            date = datetime.date(2000, int(month), int(day))
            cal[date].append(name)
    if cal:
        return responses.StreamingResponse(
            content=generate_ical_for_mapping(cal),
            media_type="text/calendar",
            headers={"Content-Disposition": f'attachment; filename="{uuid.uuid4().hex}.ics"'},
        )
    raise HTTPException(404, "No names have been found!")
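The `words` form values posted to `/api/download` encode month, day, and name as `MM__DD__Name`. A minimal sketch of that grouping step (the names in the example are illustrative; the real handler also validates each name against `vardadienas.json` before accepting it):

```python
import datetime
from collections import defaultdict


def group_by_date(words: list[str]) -> dict[datetime.date, list[str]]:
    """Parse 'MM__DD__Name' values into a date -> names map.

    Year 2000 is a dummy: the generated events recur yearly via RRULE,
    so only month and day matter.
    """
    cal: dict[datetime.date, list[str]] = defaultdict(list)
    for selected in words:
        month, day, name = selected.split("__")
        cal[datetime.date(2000, int(month), int(day))].append(name)
    return dict(cal)


print(group_by_date(["7__23__Magda", "7__23__Magone"]))
# {datetime.date(2000, 7, 23): ['Magda', 'Magone']}
```

Each resulting date becomes one all-day yearly event in `generate_ical_for_mapping`, with all names for that day joined into a single SUMMARY.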

service/gunicorn.py

@@ -1,7 +1,7 @@
 """ Reference: https://docs.gunicorn.org/en/stable/settings.html """

 """ Config File https://docs.gunicorn.org/en/stable/settings.html#config-file """
-config = "gunicorn.py"
-wsgi_app = "app:app"
+config = "service/gunicorn.py"
+wsgi_app = "service.app:app"

 """ Debugging https://docs.gunicorn.org/en/stable/settings.html#debugging """
 # reload = False

@@ -33,7 +33,7 @@ capture_output = True
 """ Process Naming https://docs.gunicorn.org/en/stable/settings.html#process-naming """
 # proc_name = None
-# default_proc_name = "backoffice"
+# default_proc_name = ""

 """ SSL https://docs.gunicorn.org/en/stable/settings.html#ssl """
 # keyfile = None

@@ -81,9 +81,9 @@ bind = "0.0.0.0:5000"
 # backlog = 2048

 """ Worker Processes https://docs.gunicorn.org/en/stable/settings.html#worker-processes """
-workers = 2
-# worker_class = "sync"
-threads = 2
+# workers = 1
+worker_class = "uvicorn.workers.UvicornWorker"
+# threads = 2
 # worker_connections = 1000
 # max_requests = 0
 # max_requests_jitter = 0
# max_requests_jitter = 0 # max_requests_jitter = 0