Gitignore & MD Handling

zhanshi 2025-03-04 19:49:13 +01:00
parent 10ca71cb98
commit bb7060d84c
4 changed files with 36 additions and 21 deletions

.gitignore vendored Normal file

@@ -0,0 +1 @@
+files/*


@@ -19,7 +19,6 @@ WORKDIR /app
 COPY --from=builder /app/codex-server .
 COPY --from=builder /app/static /app/static
 COPY --from=builder /app/templates /app/templates
-COPY --from=builder /app/files/ /app/files/
 # set a non-root user for security
 RUN addgroup -S appgroup && adduser -S appuser -G appgroup


@@ -6,7 +6,7 @@ the internet was once a chaotic, anarchic landscape. anyone with an idea and a l
 now? everything is pipes leading into five or six giant silos. if youre not posting on twitter (or its dozen corporate clones), youtube, tiktok, or facebook, you effectively dont exist. gone are the days when the best search result was a forgotten blog post with more insight than a thousand corporate articles combined. now, everything is sanitized, optimized, and locked behind engagement-maximizing algorithms designed to keep you scrolling, not learning.
-## the enclosure of the digital commons
+### the enclosure of the digital commons
 it didnt happen overnight, but the internet has been enclosed just like the commons of old. instead of digital land grabs by settlers, we got consolidation by corporations. google, meta, amazon, and a handful of others have reduced the web to a handful of walled gardens where they dictate the rules. the decentralized, federated, and diverse nature of the early internet has been all but erased.
@@ -17,13 +17,13 @@ it didnt happen overnight, but the internet has been enclosed just like the c
 if a website dies today, its not just a loss—its an extinction. archives break, links rot, and the collective memory of the internet gets shorter by the year. what survives is whats profitable, not whats valuable.
-## digital landlords and algorithmic serfdom
+#### digital landlords and algorithmic serfdom
 users have become tenants, not owners. platforms own your data, your reach, your ability to communicate. build an audience on instagram? meta decides if you can reach them. post a video on youtube? it lives or dies by an algorithm youll never fully understand. even email—once the great decentralized communication method—is being throttled by ai-driven spam filters that quietly kill independent newsletters in favor of corporate mailers.
 meanwhile, everything is monetized in the worst possible way. ads track your every move. paywalls fragment information. and the walled gardens demand not just your content, but your time, your engagement, your obedience to ever-shifting rules.
-## the path forward: rebuilding the indie web
+##### the path forward: rebuilding the indie web
 its not all doom. people are waking up. self-hosting is making a return. rss is experiencing a quiet renaissance. blogs, newsletters, and independent forums are proving that not everyone wants to live inside corporate-owned cages.
@@ -37,4 +37,4 @@ the solution?
 most of all: resist the idea that the internet must be owned by corporations. the original spirit of the web—a decentralized, user-driven, chaotic mess—was its greatest strength. the more we cede to platforms, the harder it becomes to reclaim.
 it's time to break the fences and take back the web.
+#### fin

main.go

@@ -26,7 +26,10 @@ const (
 var (
 	headerPattern      = regexp.MustCompile(`(?m)^# (.+)$`)
 	subHeaderPattern   = regexp.MustCompile(`(?m)^## (.+)$`)
-	newlinePattern     = regexp.MustCompile(`(?m)^([^#<].+)`)
+	thirdHeaderPattern  = regexp.MustCompile(`(?m)^### (.+)$`)
+	fourthHeaderPattern = regexp.MustCompile(`(?m)^#### (.+)$`)
+	fifthHeaderPattern  = regexp.MustCompile(`(?m)^##### (.+)$`)
+	sixthHeaderPattern  = regexp.MustCompile(`(?m)^###### (.+)$`)
 	boldPattern        = regexp.MustCompile(`\*\*(.*?)\*\*|__(.*?)__`)
 	italicPattern      = regexp.MustCompile(`\*(.*?)\*|_(.*?)_`)
 	inlineCodePattern  = regexp.MustCompile("`([^`]+)`")
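The new per-level header patterns can be exercised standalone; a minimal sketch (regexes copied from the diff, the `renderHeaders` helper name is hypothetical, not part of the commit):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same anchored, multi-line patterns as in the diff: each matches a full
// line beginning with exactly N '#' characters followed by a space.
var (
	headerPattern      = regexp.MustCompile(`(?m)^# (.+)$`)
	subHeaderPattern   = regexp.MustCompile(`(?m)^## (.+)$`)
	thirdHeaderPattern = regexp.MustCompile(`(?m)^### (.+)$`)
)

// renderHeaders (hypothetical helper) applies the replacements in order.
func renderHeaders(s string) string {
	s = headerPattern.ReplaceAllString(s, "<h1>$1</h1>")
	s = subHeaderPattern.ReplaceAllString(s, "<h2>$1</h2>")
	s = thirdHeaderPattern.ReplaceAllString(s, "<h3>$1</h3>")
	return s
}

func main() {
	fmt.Println(renderHeaders("# title\n## section\n### sub"))
}
```

The required space after the `#` run is what keeps `^# ` from matching `## section`, so the six replacements are independent and their order does not matter.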
@@ -88,18 +91,30 @@ func readMarkdown(filename string) string {
 func renderMarkdown(content string) template.HTML {
+	content = codeBlockPattern.ReplaceAllString(content, "<pre><code>$1</code></pre>")
+	content = inlineCodePattern.ReplaceAllString(content, "<code>$1</code>")
 	content = headerPattern.ReplaceAllString(content, "<h1>$1</h1>")
 	content = subHeaderPattern.ReplaceAllString(content, "<h2>$1</h2>")
+	content = thirdHeaderPattern.ReplaceAllString(content, "<h3>$1</h3>")
+	content = fourthHeaderPattern.ReplaceAllString(content, "<h4>$1</h4>")
+	content = fifthHeaderPattern.ReplaceAllString(content, "<h5>$1</h5>")
+	content = sixthHeaderPattern.ReplaceAllString(content, "<h6>$1</h6>")
 	content = boldPattern.ReplaceAllString(content, "<b>$1$2</b>")
 	content = italicPattern.ReplaceAllString(content, "<i>$1$2</i>")
-	content = inlineCodePattern.ReplaceAllString(content, "<code>$1</code>")
-	content = codeBlockPattern.ReplaceAllString(content, "<pre><code>$1</code></pre>")
 	content = blockquotePattern.ReplaceAllString(content, "<blockquote>$1</blockquote>")
+	content = linkPattern.ReplaceAllString(content, `<a href="$2">$1</a>`)
+	// Handle lists properly
 	content = ulPattern.ReplaceAllString(content, "<li>$1</li>")
 	content = olPattern.ReplaceAllString(content, "<li>$1</li>")
-	content = linkPattern.ReplaceAllString(content, `<a href="$2">$1</a>`)
-	content = regexp.MustCompile(`(<li>.+?</li>)`).ReplaceAllString(content, "<ul>$1</ul>")
-	content = newlinePattern.ReplaceAllString(content, "<p>$1</p>")
+	content = regexp.MustCompile(`(?m)(<li>.+?</li>)`).ReplaceAllString(content, "<ul>$1</ul>")
+	content = regexp.MustCompile(`(?m)(<ul>(?:<li>.+?</li>)+)</ul>`).ReplaceAllString(content, "$1")
+	content = regexp.MustCompile(`(?m)(<li>.+?</li>)`).ReplaceAllString(content, "<ol>$1</ol>")
+	content = regexp.MustCompile(`(?m)(<ol>(?:<li>.+?</li>)+)</ol>`).ReplaceAllString(content, "$1")
+	// Preserve paragraph structure without breaking inline elements
+	paragraphPattern := regexp.MustCompile(`(?m)(^([^#<\n].+)$)\n?`)
+	content = paragraphPattern.ReplaceAllString(content, "<p>$1</p>")
 	return template.HTML(content)
 }