Project Updates June 2023 - March 2024
This note outlines updates from June 2023 until March 2024 in bullet-point format for the following three projects: the proxy server, the Kubernetes cluster, and the portfolio application. The proxy server and Kubernetes cluster received updates that enhanced security, performance, and manageability. Meanwhile, the portfolio application underwent improvements related to the user experience.
NGINX Proxy
v1.2.0
- Added Access and Error Logging
  - gzip on; access_log /var/nginx/access.log compression;
    - Access logs are compressed in gzip format
    - Logs contain (see the log_format sketch below):
      - remote address
      - local time
      - request URL
      - status
      - body byte size
      - response time
      - SSL protocol
      - SSL cipher
  - error_log /var/nginx/error.log warn;
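The post does not include the log_format definition itself. A minimal sketch of what a format named compression covering the fields above could look like, assuming the standard NGINX variables for each field:

log_format compression '$remote_addr [$time_local] "$request" '
                       '$status $body_bytes_sent $request_time '
                       '$ssl_protocol $ssl_cipher';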
- Added Caching
  - Mainly used for the static portfolio web application
  - proxy_cache_path /var/nginx/cache keys_zone=cache:10m max_size=1g inactive=60m use_temp_path=off;
    - max_size: 1 GB
    - inactive: 60 minutes - an item is refreshed from the source after 60 minutes without being accessed
    - use_temp_path: off - reduces I/O
  - proxy_cache_key "$scheme$host$request_uri$body_bytes_sent";
    - Defines the unique key for a cache item
  - proxy_cache_valid 200 302 5m;
    - Limits how long cached responses with the listed status codes are considered valid
  - proxy_cache_min_uses 3;
    - The number of times the same proxy_cache_key has to appear before the item is cached
  - proxy_cache_revalidate on;
    - Enables revalidation of expired cache items using conditional requests with the "If-Modified-Since" and "If-None-Match" header fields
  - proxy_ignore_headers and proxy_hide_header fields are specified to prevent NGINX from disabling caching because of request/response headers (see the consolidated sketch below)
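Pulling the directives above together, a sketch of the relevant proxy configuration. The upstream name and the exact proxy_ignore_headers / proxy_hide_header values are assumptions, since the post only states that they are set:

proxy_cache_path /var/nginx/cache keys_zone=cache:10m max_size=1g
                 inactive=60m use_temp_path=off;

server {
    listen 443 ssl;
    server_name diderikk.dev;

    location / {
        proxy_cache cache;
        proxy_cache_key "$scheme$host$request_uri$body_bytes_sent";
        proxy_cache_valid 200 302 5m;
        proxy_cache_min_uses 3;
        proxy_cache_revalidate on;

        # Assumed values: ignore upstream headers that would otherwise disable caching
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        proxy_hide_header Set-Cookie;

        proxy_pass http://portfolio_upstream;  # placeholder upstream name
    }
}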
- Added Logrotate for the access/error logs
- Changed Domain Registrar to Cloudflare
  - Transferred the diderikk.dev domain from Namecheap to Cloudflare.
  - The main benefit is the proxy feature Cloudflare provides. All requests to diderikk.dev now pass through Cloudflare's servers before reaching the NGINX proxy. In combination with updated firewall rules on the proxy server, only Cloudflare IP addresses can reach the NGINX proxy.
  - While requests are proxied through Cloudflare, additional features are applied:
    - DDoS protection
    - Analytics (traffic, performance, and security)
    - Bad bot protection
    - Caching
  - Strict TLS ensures all communication between a client (browser) and the NGINX proxy uses TLS.
  - Cloudflare also issues signed TLS certificates that are valid for up to 15 years, a substantially longer lifetime than Let's Encrypt, which only issues certificates with a 90-day expiration.
Portfolio
v1.2.4
- Improved the first page (typed text)
  - Removed the cursor jump when the subtitle appears
- Fixed a bug on larger screens where the Projects component disappeared because of its animation
  - Caused by the Swiper component's x coordinate being a large negative number. This rendered the component invisible to the IntersectionObserver, so the "show" class was never appended to the class list when the area intersected the user's view. Consequently, the Projects component never appeared.
  - Solved by appending the "show" class when the "Project" HTML header element is intersecting (see the sketch below)
- Use GitHub README.md files as project descriptions
  - Added a new column in the SQL table containing the URL to the raw README file
  - Used marked to convert Markdown to HTML (see the usage sketch after the function below)
  - Replaced GitHub-relative file paths with absolute paths (from "./example" to https://github.com/diderikk/Blog/example):
// Rewrites relative links in a README so they resolve against the GitHub repository:
// image paths ("./img.png") are pointed at the raw content URL, and other relative
// links are pointed at the repository's /tree/<branch>/ path.
const replaceGitHubImageUrls = async (
  markdownText: string,
  urlPrefix: string,
  rawUrlPrefix: string
): Promise<string> => {
  const branch = rawUrlPrefix.includes("master") ? "master" : "main";
  let mdTextCopy = markdownText;
  // Markdown image syntax: ![alt](path) - capture group 1 is the path
  const mdImageRegex = /^!\[[a-zA-Z0-9.\-/+ _]*\]\(([a-zA-Z0-9.\-/+ _]*)\)$/gim;
  // Markdown link syntax: [text](path) - capture group 1 is the path
  const mdUrlRegex = /\[[a-zA-Z0-9.\-/+ _]*\]\(([a-zA-Z0-9.\-/+ _]*)\)/gim;
  const mdImages = markdownText.matchAll(mdImageRegex);
  const mdUrls = markdownText.matchAll(mdUrlRegex);

  // Replace relative image paths ("./...") with the raw GitHub content URL
  Array.from(mdImages).forEach((url) => {
    mdTextCopy = mdTextCopy.replaceAll(url[1], rawUrlPrefix + url[1].slice(2));
  });

  // Replace the remaining relative links with the repository tree URL,
  // skipping paths already handled above and avoiding duplicate replacements
  const uniqueUrls: string[] = [];
  Array.from(mdUrls).forEach((url) => {
    if (!url[1].startsWith("./") && !uniqueUrls.includes(url[1])) {
      mdTextCopy = mdTextCopy.replaceAll(
        url[1],
        `${urlPrefix}/tree/${branch}/${url[1]}`
      );
      uniqueUrls.push(url[1]);
    }
  });
  return mdTextCopy;
};
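For context, a sketch of how the conversion step could be wired together when loading a project description. The fetch wrapper and URL values are hypothetical; only the use of marked and the function above are taken from the post:

import { marked } from "marked";

// Hypothetical values; the real raw README URL comes from the new SQL column.
const repoUrl = "https://github.com/diderikk/Blog";
const rawUrlPrefix = "https://raw.githubusercontent.com/diderikk/Blog/main/";

const loadProjectDescription = async (rawReadmeUrl: string): Promise<string> => {
  const markdown = await fetch(rawReadmeUrl).then((res) => res.text());
  const withAbsoluteUrls = await replaceGitHubImageUrls(markdown, repoUrl, rawUrlPrefix);
  // Convert the rewritten Markdown to HTML before rendering it in the portfolio
  return await marked.parse(withAbsoluteUrls);
};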
v1.2.5
- Added logging for each incoming request
  - Currently a simple implementation using console.log
v1.2.6
- Included request headers in the logging
  - console.log(`${req.method} ${req.url} ${JSON.stringify(req.headers)}`);
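One way this per-request logging could be wired up; whether the portfolio uses Next.js middleware, a custom server, or something else for this is an assumption:

// middleware.ts - hypothetical placement of the log statement in a Next.js middleware.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(req: NextRequest) {
  // NextRequest exposes headers as a Headers object, so convert it before serializing
  console.log(
    `${req.method} ${req.url} ${JSON.stringify(Object.fromEntries(req.headers))}`
  );
  return NextResponse.next();
}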
v1.3.1
- Solved a performance issue related to the loading of Project card images
  - Replaced NextJS's Image tag, which caused the browser to send multiple requests to the proxy to fetch cached images from the NextJS application.
  - Replaced it with HTML's img tag (see the sketch below). Even though NextJS no longer optimizes the images, the browser loads them noticeably faster.
  - Fewer requests are sent to the NextJS application when loading the page.
- TODO: The issue still persists for Snackbar images
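A minimal sketch of the swap; the component and prop names are hypothetical, not the actual portfolio code:

// ProjectCardImage.tsx
type Props = { src: string; alt: string };

// Before: import Image from "next/image" and render <Image src={src} alt={alt} ... />,
// which routed every image through NextJS's optimizer and produced extra requests to the proxy.
// After: a plain img tag, which the browser loads noticeably faster.
export const ProjectCardImage = ({ src, alt }: Props) => (
  <img src={src} alt={alt} />
);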
Kubernetes Cluster
Grafana
- All application logs are forwarded to Grafana using Loki
- Stopped sending metrics data to Grafana; now only logs and events are sent to the Grafana Cloud free instance.
- Added a dashboard for the home website
Proxy Integration
- All endpoints moved behind the proxy
  - Reason: utilize all the benefits of the proxy intermediary:
    - Access logging
    - Caching
    - Rate limiting
    - Hiding the Ingress attack surface
  - All web applications now only use the proxy endpoints. This also holds for WebSocket endpoints (see the sketch below).
- Changed inbound rules
  - All cluster/node endpoints are only accessible through the proxy
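A sketch of how a cluster endpoint could be exposed through the proxy. The upstream address, server name, and header set are assumptions, included mainly to show the WebSocket upgrade handling mentioned above:

upstream cluster_ingress {
    server 10.0.0.10:443;  # placeholder address for the NGINX Ingress
}

server {
    listen 443 ssl;
    server_name app.diderikk.dev;  # placeholder subdomain

    location / {
        proxy_pass https://cluster_ingress;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Needed for the WebSocket endpoints that are also proxied
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}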
Ingress
- Added CORS to the NGINX Ingress:
  - nginx.ingress.kubernetes.io/enable-cors: "true"
  - nginx.ingress.kubernetes.io/cors-allow-origin: "https://diderikk.dev, https://*.diderikk.dev"
- NGINX Ingress ports now only allow traffic from the proxy.
Deployment updates
PgBouncer
- All applications accessing a database in the cluster now fetch a connection from the PgBouncer pool (see the sketch below)
- Only PgBouncer needs access to the databases, simplifying network policies -> all applications must go through the connection pool.
- Simplifies adding new databases, since PgBouncer works as a proxy
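A sketch of what connecting through PgBouncer looks like from an application's point of view. The service name, port, and database are placeholders, and the actual applications may use different drivers:

import { Pool } from "pg";

// Connect to the PgBouncer service instead of the database directly.
const pool = new Pool({
  host: "pgbouncer.databases.svc.cluster.local", // placeholder service DNS name
  port: 6432,                                    // PgBouncer's default listen port
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: "portfolio",                         // placeholder database name
});

export const query = (text: string, params?: unknown[]) => pool.query(text, params);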
Database Backup
- Added a CronJob for storing a database backup hourly
- Currently, it stores the backup on a separate node. Not ideal -> should be moved to remote storage. Ideas:
  - Create a separate PV connected to a cloud CSI driver (prioritized)
  - Write an API that can handle file uploads for storing in cloud storage -> storage in an S3 bucket
Jaeger
- Future applications should emit traces (a sketch of what that could look like follows below)
- Jaeger has been tested on a home-lab instance -> easy to implement.
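Purely as an illustration, since the post does not name an instrumentation library: a sketch of emitting traces to Jaeger from a Node/TypeScript service via OpenTelemetry's OTLP exporter. The package choice, endpoint, and service name are assumptions:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  serviceName: "portfolio",                        // placeholder service name
  traceExporter: new OTLPTraceExporter({
    url: "http://jaeger-collector:4318/v1/traces", // placeholder OTLP HTTP endpoint
  }),
});

// Start the SDK before the application begins handling requests
sdk.start();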
Misc
Literature
- Hacking Kubernetes
- Kubernetes In Action
- Learning eBPF
- Security Observability with eBPF