An Address Opposed to Legalizing Cannabis

Esteemed members of the public, honored policymakers, and delegates: we convene today not to debate personal freedoms, but to confront the stark, quantified realities of a failed policy experiment. Proponents of cannabis legalization ask us to accept a bargain: moderate tax revenue in exchange for unmeasured, manageable risk. Yet the overwhelming body of epidemiological and clinical evidence demonstrates that this bargain is a profound and costly public health failure. We must shift the conversation from gross revenue to net social cost. The data show that legalization, as currently implemented, merely exchanges a burden on the criminal justice system for a vastly more expensive, complex, and tragic burden on our healthcare systems, our schools, and the future capacity of our citizens. ...

November 6, 2025 · 10 min · 2075 words · xxraincandyxx

Your Title

📝 LLM Research Journal Entry: [Experiment/Project Title]
Metadata
Date: [YYYY-MM-DD]
Paper: LoCoMo PDF
Researcher(s): [Your Name(s)]
LLM Model Used: [e.g., GPT-4, Llama 2 7B, Mistral 7B, etc.]
Model Version/Fine-Tuning Details: [e.g., Base model, Instruct-tuned, Custom fine-tuning parameters]
Task/Goal: [A concise statement of the experiment’s objective]
1. 🎯 Experiment Design & Methodology
Describe how the experiment was conducted, including the dataset and evaluation metrics.
Hypothesis: [What did you expect to happen or what were you testing?]
Dataset/Prompt Set:
Source: [e.g., Custom generated, Open-source dataset like SQuAD, HELM]
Size: [Number of samples/prompts]
Preprocessing: [e.g., Cleaned, standardized, converted to specific format]
Key Parameters (Inference Settings):
Temperature: [e.g., 0.7, 0.0]
Top-P/Top-K: [e.g., 0.9, 50]
Max New Tokens: [e.g., 256]
Prompting Technique: [e.g., Zero-shot, Few-shot (N=X examples), Chain-of-Thought (CoT)]
2. 📊 Results & Benchmarks
Present the quantitative and qualitative findings, focusing on key performance indicators (KPIs). ...
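As a rough companion to the template, here is a minimal Python sketch of how the "Key Parameters (Inference Settings)" and metadata fields could be recorded programmatically; every dataclass name, field, and example value below is an illustrative assumption, not something the post specifies.

```python
# Minimal sketch: capturing one journal entry as structured data.
# All names and example values are hypothetical, for illustration only.
from dataclasses import dataclass, asdict
import json


@dataclass
class InferenceSettings:
    temperature: float = 0.7      # e.g. 0.0 for deterministic decoding
    top_p: float = 0.9            # nucleus sampling threshold
    top_k: int = 50               # truncate to the k most likely tokens
    max_new_tokens: int = 256
    prompting: str = "zero-shot"  # or "few-shot (N=X)", "chain-of-thought"


@dataclass
class JournalEntry:
    date: str
    model: str
    task: str
    settings: InferenceSettings


entry = JournalEntry(
    date="2025-11-01",
    model="Mistral 7B",  # placeholder model name
    task="[A concise statement of the experiment's objective]",
    settings=InferenceSettings(temperature=0.0, max_new_tokens=256),
)
print(json.dumps(asdict(entry), indent=2))  # serialize the entry for the journal
```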

November 1, 2025 · 2 min · 408 words · xxraincandyxx

Injective Transformers Reasoning

Reasoning about the under-the-hood theory behind injective transformers. Deprecated for Redundant Mathematics. 1 — Notation and setup Vocabulary: $ \mathcal{V} $ with $ |\mathcal{V}|=V $. Tokens: a true input sequence $ s = (s_1,\dots,s_T) $, $ s_t\in\mathcal{V} $. Prefix at step $ t $: $ p_{t} = (s_1,\dots,s_{t-1}) $. Transformer (deterministic) forward mapping from a token sequence to layer-$ \ell $ hidden states: $$ \Phi^\ell : \mathcal{V}^T \to \mathbb{R}^{T\times d},\qquad \Phi^\ell(s) = H^\ell = [h^\ell_1,\dots,h^\ell_T]^\top $$ where $ h^\ell_t\in\mathbb{R}^{d} $ is the hidden state at position $ t $ and layer $ \ell $. Observed hidden states (from system/leak): $ \widetilde H^\ell = \Phi^\ell(s) $ (assumed exact for the theory; noise/quantization is added later). For brevity, drop the layer superscript when it is fixed: $ \Phi,\; h_t $. Two contrasts we will study for decoding token $ s_t $ given prefix $ p_t $ and observed hidden $ h_t $: ...
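To make the notation concrete, here is a toy Python sketch of a map with the same signature as $ \Phi^\ell $; the single embedding-plus-self-attention layer below is an illustrative stand-in (an assumption, not the post's actual model), used only to show the shapes $ \mathcal{V}^T \to \mathbb{R}^{T\times d} $.

```python
# Toy stand-in for Phi^l : V^T -> R^{T x d}; not the model from the post.
import torch

V, d, T = 100, 16, 8                      # vocabulary size, hidden width, sequence length
emb = torch.nn.Embedding(V, d)            # token embedding
attn = torch.nn.MultiheadAttention(d, num_heads=2, batch_first=True)


def phi(s: torch.Tensor) -> torch.Tensor:
    """Deterministic forward map: token ids (T,) -> hidden states H (T, d)."""
    x = emb(s).unsqueeze(0)               # (1, T, d)
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)  # causal mask
    h, _ = attn(x, x, x, attn_mask=mask)  # one self-attention layer stands in for layer l
    return h.squeeze(0)                   # H^l = [h_1, ..., h_T]^T, shape (T, d)


s = torch.randint(0, V, (T,))             # a toy "true input sequence"
H = phi(s)
print(H.shape)                            # torch.Size([8, 16]): one hidden state per position
```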

October 29, 2025 · 14 min · 2827 words · xxraincandyxx

Backward Propagation Theory

A deep, mathematical illustration of backward propagation in deep neural networks. Setup and notation Consider an L-layer Feedforward Neural Network (FNN/MLP). For layer $l=1,\dots,L$: $n_{l}$ = number of units in layer $l$. Input: $a^{(0)} = x \in \mathbb{R}^{n_0}$. Linear pre-activation: $z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}$, where $W^{(l)}\in\mathbb{R}^{n_l\times n_{l-1}}$, $b^{(l)}\in\mathbb{R}^{n_l}$. Activation: $a^{(l)} = \phi^{(l)}(z^{(l)})$ (applied elementwise). Output: $a^{(L)}$. Loss for one example: $\mathcal{L} = \mathcal{L}(a^{(L)}, y)$. We want gradients: ...
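As a concrete instance of this setup, here is a small NumPy sketch of the forward pass $z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}$, $a^{(l)} = \phi(z^{(l)})$ and the corresponding backward pass for a two-layer network; the sigmoid activations, squared-error loss, and layer sizes are illustrative assumptions, not choices made in the post.

```python
# Two-layer MLP, forward and backward pass written out by hand (illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)
n0, n1, n2 = 3, 4, 2                       # layer widths n_0, n_1, n_2
W1, b1 = rng.normal(size=(n1, n0)), np.zeros(n1)
W2, b2 = rng.normal(size=(n2, n1)), np.zeros(n2)
x, y = rng.normal(size=n0), rng.normal(size=n2)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Forward pass: a^(0) = x, z^(l) = W^(l) a^(l-1) + b^(l), a^(l) = phi(z^(l))
a0 = x
z1 = W1 @ a0 + b1; a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)
loss = 0.5 * np.sum((a2 - y) ** 2)         # squared-error loss (assumed)

# Backward pass: delta^(l) = dL/dz^(l), propagated with the chain rule
delta2 = (a2 - y) * a2 * (1 - a2)          # dL/dz^(2), using sigmoid' = a(1-a)
dW2, db2 = np.outer(delta2, a1), delta2    # dL/dW^(2), dL/db^(2)
delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # dL/dz^(1)
dW1, db1 = np.outer(delta1, a0), delta1    # dL/dW^(1), dL/db^(1)
print(loss, dW1.shape, dW2.shape)          # (4, 3) and (2, 4) match W^(1), W^(2)
```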

September 23, 2025 · 5 min · 904 words · xxraincandyxx

Skip-Connection Theory

Skip-Connection A deep, mathematical illustration of the skip connection. 1. Basic Formulation of a Residual Block Without skip connections, a block is just: $$ x_{l+1} = \mathcal{F}(x_l; W_l) $$ With skip connections (ResNets): $$ x_{l+1} = x_l + \mathcal{F}(x_l; W_l) $$ where: $x_l \in \mathbb{R}^d$ is the input at layer $l$, $\mathcal{F}(x_l; W_l)$ is the residual function (typically a small stack of convolution, normalization, and nonlinearity), and the skip connection is the identity mapping $I(x) = x$. 2. Gradient Flow: Chain Rule Analysis Consider a loss $\mathcal{L}$. The gradient w.r.t. the input $x_l$ is: ...
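Here is a minimal PyTorch sketch of the residual update $x_{l+1} = x_l + \mathcal{F}(x_l; W_l)$ and of the identity term it contributes to the Jacobian; the tiny fully connected $\mathcal{F}$ and the dimension are illustrative assumptions, not the convolutional block the post describes.

```python
# Residual update x_{l+1} = x_l + F(x_l; W_l), with a check that its Jacobian is I + dF/dx.
import torch

d = 8
F = torch.nn.Sequential(                   # residual function F(x; W_l), a toy choice
    torch.nn.Linear(d, d), torch.nn.ReLU(), torch.nn.Linear(d, d)
)

x_l = torch.randn(d, requires_grad=True)
x_next = x_l + F(x_l)                      # skip connection adds the identity path

# Jacobian of the block: dx_{l+1}/dx_l = I + dF/dx_l
J_block = torch.autograd.functional.jacobian(lambda x: x + F(x), x_l)
J_resid = torch.autograd.functional.jacobian(F, x_l)
print(torch.allclose(J_block, torch.eye(d) + J_resid))  # True
```

Because the Jacobian always contains this identity term, the gradient of the loss reaches $x_l$ directly even when $\partial\mathcal{F}/\partial x_l$ is small, which is the gradient-flow argument the chain-rule section develops.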

September 22, 2025 · 3 min · 532 words · xxraincandyxx