# Jorbox — full content Last updated: 2026-05-16 > Jorbox LLC is an independent product company in Albuquerque, New Mexico. Founded in 2012 by Ahmad Tayyem. Operates a portfolio of five SaaS brands (QRLynx, Zawjni, Menujo, Lebseh, CVPoet) end-to-end — design, code, infrastructure, and growth handled in-house. No outside investment. Customers reached in 200+ countries. Alongside the portfolio Jorbox takes on a small number of carefully-chosen client projects in four areas — web development, hosting, domain registration, SEO + GEO audits — where its existing stack already runs. Wikidata: Q138655727 · LinkedIn: https://www.linkedin.com/company/jorbox · Crunchbase: https://www.crunchbase.com/organization/jorbox --- # Home — Five brands. One company behind them. Jorbox is an independent product company in Albuquerque, New Mexico. We build, host, and operate every product on this page in-house — design, code, infrastructure, growth. Independent · founder-led · since 2012. ## Quick facts - Founded: 2012 (New Mexico, United States) - Years operating: 14+ (independent, founder-led, no VC) - Active products: 5 (QRLynx, Zawjni, Menujo, Lebseh, CVPoet) - Countries reached: 200+ (across our product platforms) ## How we work — built and run under one roof There is no agency, no outsourced ops team, no white-label vendor stack. Every product on the brands page is engineered, hosted, and marketed by the same people you would email about it. - **Product engineering** — We design, build, and maintain every product in-house — from the database schema to the marketing site. - **Infrastructure** — Hosting, DNS, email, and CDN are all run on infrastructure we own and operate. Same stack across every brand. - **Growth** — SEO, content, and paid acquisition handled internally. The teams who write the code also watch the analytics. - **Long horizons** — We are not optimizing for an exit. Every product is built to keep running for the next decade. ## Our approach We are not the biggest. 
We are not chasing an exit. We are a company building things we want to keep running. Most companies our age have been acquired, pivoted, or quietly shut down. We have avoided that by keeping the team focused, the margins reasonable, and the work interesting enough that nobody wants to leave. The flip side: we do not ship products to chase trends. Every brand we own solves a specific problem we cared enough about to maintain for a decade. That's the bar. What that looks like: - Products are designed to outlive their launch quarter. - Pricing is built to be sustainable, not to win a vanity number. - Support replies come from the same people who wrote the code. - Roadmaps are public when they affect customers. --- # About — A small, stubborn company. Fourteen years in. We started in 2012 with a simple rule: do honest work, answer your own emails, take no outside money. The rule still holds. ## The story, briefly Jorbox LLC started small in 2012, grew the boring way, and now runs a portfolio of five products end-to-end. We are headquartered in Albuquerque, New Mexico. Our products have reached customers in over two hundred countries. There is no investor on the cap table, no acquisition story to tell, and no PR firm to credit. The team is structured around generalists who care about the entire stack — design, code, hosting, growth — because every product on our brands page needs all four to keep running. As of 2026 the portfolio is the focus — keeping each brand sharp, the infrastructure quiet, and the support inboxes empty. We still take a small number of carefully-chosen client projects (web development, hosting, domains, SEO + GEO) where our existing stack already runs and our standards already apply. ## By the numbers - Founded: 2012 - Years operating: 14+ - Active products: 5 - Countries reached: 200+ - VC raised: none ## A short history - 2012 — Jorbox founded in Albuquerque, New Mexico. - 2015 — Built and rolled out our internal hosting platform. 
- 2017 — Launched Zawjni — the first product still running today. - 2019 — Brought all marketing in-house: SEO, paid acquisition, content, analytics. - 2024 — Formed Jorbox LLC. Reached customers across 200+ countries through our products. - 2025 — Shipped QRLynx; opened the Lebseh and CVPoet betas. - 2026 — Refocused around the brand portfolio. Client work limited to four core services where our stack already runs. ## How we operate — four rules we actually follow 1. **Build things to last.** Every brand we ship is meant to keep running for the next decade. We measure success in how long a product stays online and useful, not how fast it grows. 2. **No outside money.** No VCs, no acquirers, no exit timeline pressuring decisions. We grow on revenue from the products, which means the products have to actually work. 3. **Own the whole stack.** Hosting, design, code, marketing, support — under one roof. When something needs fixing, the person fixing it usually wrote it. 4. **Plain language.** If we cannot say it plainly, we do not understand it well enough. You should never feel dumb asking a question. ## A note from the founder We started in 2012. Fourteen years later, we are a company that knows what it is for — same operating team, more focus. Now we build for ourselves. For most of our existence, we split our time between client work and shipping our own products. Both ways were fine. Neither was great. The client work paid the bills; the products paid us back in compounding learning. In 2026 we narrowed the client side to four services where our own stack already runs — web development, hosting, domains, SEO + GEO — and put most of our time into the portfolio. Five active brands — QRLynx, Zawjni, Menujo, Lebseh, CVPoet — that we own, operate, and intend to keep running for the next decade. 
— Ahmad Tayyem, Founder, Jorbox LLC ## Legal - Legal entity: Jorbox LLC (New Mexico, United States) - Address: 1209 Mountain Road Pl NE, Ste N, Albuquerque, NM 87110, USA - Phone: +1 (505) 234-5886 - Email: contact@jorbox.com - Not us: Jorbox Ltd (UK, Companies House 14854246, dissolved 2026-02-17) was a separate, short-lived UK entity — not the same legal entity as Jorbox LLC and never the operating company. The operating company is, and has always been, Jorbox LLC of Albuquerque, New Mexico. --- # Brands — Five products we actually use ## 01 — QRLynx (https://qrlynx.com) Category: SaaS · QR generator. Live since 2024. Stats: 200+ countries, 47 code types. Dynamic QR codes with real analytics. Free QR code generator with 47 code types, real-time analytics, AI insights, and codes that never expire — no watermarks, no scan caps. Read more on jorbox.com: https://www.jorbox.com/brands/qrlynx. ## 02 — Zawjni (https://zawjni.com) Category: Marketplace · Social. Live since 2017. Stats: 33K+ members, AR/EN languages. Marriage matchmaking for the Arab world. Arabic-first matrimonial platform connecting compatible partners with serious intentions. Magic-link auth, verified profiles, and culturally-aware matching. Read more on jorbox.com: https://www.jorbox.com/brands/zawjni. ## 03 — Menujo (https://menujo.com) Category: SaaS · Restaurants. Live since 2024. Stats: 5 min to launch, ∞ menu edits. QR menus that update instantly. Contactless menus for restaurants, cafés, and bars. Live in 5 minutes, edit anything in seconds, never reprint a menu again. Read more on jorbox.com: https://www.jorbox.com/brands/menujo. ## 04 — Lebseh (https://lebseh.com) Category: E-commerce · Apparel. Beta (2025). Stats: POD fulfillment, AR/EN languages. Custom print-on-demand clothing. Designs printed on tees, hoodies, and accessories the moment you order. No inventory, no minimums, ships worldwide — built for the Arab creator economy. Read more on jorbox.com: https://www.jorbox.com/brands/lebseh. 
## 05 — CVPoet (https://cvpoet.com)

Category: SaaS · Career tools. Coming 2025. Stats: AI tailoring, ATS optimized.

Resumes that actually get read. AI-assisted resume builder that turns boring job histories into sharp, ATS-friendly stories. Beautiful templates, instant tailoring per role. Read more on jorbox.com: https://www.jorbox.com/brands/cvpoet.

---

# Services — Four services. One company.

Web development, hosting, domain registration, SEO + GEO audits. The same infrastructure and toolchain we use for our own five SaaS products — offered to a limited number of carefully chosen clients.

## 01 — Web Development (https://www.jorbox.com/services/web-development)

Eyebrow: Service 01 · Since 2012. Key numbers: Operating since 2012 · 5 own SaaS products · <50ms typical TTFB on edge · 275+ edge locations worldwide.

Custom web apps + marketing sites on a modern edge stack. Jorbox provides custom web development services on a modern edge-first stack — SvelteKit (Svelte 5), Cloudflare Pages, Workers, D1 (edge SQLite), R2 (object storage), and KV (key-value cache). We build production-grade web applications, marketing sites, dashboards, and internal tools. Every layer of the stack we deploy for clients is one we run for our own five SaaS products (QRLynx, Zawjni, Menujo, Lebseh, CVPoet) — so we have direct production experience with everything we ship.

What's included: SvelteKit 2 / Svelte 5 (Runes mode), or Next.js if your stack mandates it; Tailwind v4 with design tokens — no hardcoded colors; fully responsive, mobile-first design; dark + light mode by default; accessible — WCAG 2.1 AA target, skip links, semantic HTML; View Transitions API where supported, graceful fallback elsewhere; image optimization via Cloudflare Images or build-time WebP/AVIF; Cloudflare Pages for static + SSR.

How we work: Discovery call (60 min, free) → Scoped proposal → Design + architecture review → Build in public branches → SEO + GEO + performance audit before launch → Launch + handoff.
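To make "edge-first" concrete, here is a minimal sketch of a Workers-style request handler. The /api/health route and its payload are hypothetical, and a real project on this stack would be a SvelteKit app compiled for Cloudflare Pages rather than a hand-written handler; the point is only the shape of the code that runs at the edge.

```typescript
// Minimal sketch of an edge request handler (Workers-style fetch API).
// Route and payload are illustrative, not code from an actual project.
type EdgeHandler = (request: Request) => Promise<Response>;

const handleRequest: EdgeHandler = async (request) => {
  const url = new URL(request.url);
  if (url.pathname === "/api/health") {
    // Answered at the CDN edge nearest the visitor, with no origin
    // round-trip; that is where sub-50ms TTFB figures come from.
    return new Response(JSON.stringify({ ok: true }), {
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("Not found", { status: 404 });
};
```

In a real deployment this shape is generated by the framework adapter; what matters is that every route resolves at an edge location rather than a single origin server.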
## 02 — Web Hosting (https://www.jorbox.com/services/web-hosting) Eyebrow: Service 02 · Live since 2012. Key numbers: 14+ Years hosting, 99.9% Uptime SLA, 24h Off-server backups, Free SSL via Cloudflare. Managed cPanel hosting, fronted by Cloudflare. Jorbox provides managed cPanel web hosting fronted by the Cloudflare global CDN. Standard cPanel features (file manager, MySQL, email accounts, FTP, cron jobs, PHP version switching) with the operational care of an in-house team — daily off-server backups, proactive security patching, free SSL via Cloudflare, and unmetered CDN bandwidth. Operating from a German Hetzner datacenter since 2020, on infrastructure we have run our own SaaS products on for years. What's included: cPanel control panel with full file manager + MySQL + email; Multiple PHP versions (7.4 through 8.4); Node.js + Python available on request; Cron jobs; Subdomains, addon domains, parked domains; SSH access on Business plan and above; Daily off-server backups, 7 daily + 4 weekly + 12 monthly retention; Cloudflare CDN by default — unmetered, global. How we work: Pick a plan → Provision the account → Migrate (if migrating) → Point DNS → Cloudflare goes in front → Run. ## 03 — Domain Registration (https://www.jorbox.com/services/domain-registration) Eyebrow: Service 03 · 500+ TLDs. Key numbers: 500+ TLDs supported, Free WHOIS privacy, 60s Typical activation, 14+ Years as a registrar. Register or transfer any major TLD — at honest prices. Jorbox provides domain registration and transfer services across 500+ top-level domains, including .com, .net, .org, .io, .co, .me, .us, and most country-code TLDs. Powered by the ResellerClub API (the same infrastructure we use for our own brand domains), with free WHOIS privacy on most TLDs, easy bidirectional transfers, and Cloudflare DNS management included. 
What's included: Free WHOIS privacy (where TLD allows); Free auto-renewal (toggleable); Free EPP authorization code on request; Free 30-day renewal grace period after expiry; Free transfer-lock toggle in client portal; Free outbound transfer support; Cloudflare DNS auto-wired; A, AAAA, CNAME, MX, TXT, SRV, CAA records — all standard types. How we work: Check availability → Register → Or — transfer in → Manage DNS → Renew (or let it lapse) → Transfer out (if you ever want to). ## 04 — SEO & GEO (https://www.jorbox.com/services/seo-geo) Eyebrow: Service 04 · GEO-first. Key numbers: 6 Audit categories, 100 Point GEO score, 5+ AI engines tracked, 20+ AI crawlers handled. Technical SEO + AI search visibility — audit and ship. Jorbox provides Search Engine Optimization (SEO) and Generative Engine Optimization (GEO) services — auditing and improving how websites perform in both classic search engines (Google, Bing, DuckDuckGo) and AI search engines (ChatGPT, Claude, Perplexity, Google AI Overviews, Bing Copilot). We audit AND implement — the same toolkit we used to bring jorbox.com from a 76/100 GEO score on day-of-launch to its current 77+ with a clear path to 84. What's included: Passage-level citability scoring (will AI engines quote this paragraph?); Answer-block quality (definitional paragraphs near the top of every page); AI crawler robots.txt allow-list (GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, Bingbot, etc.); llms.txt and llms-full.txt generation; Speakable schema for voice search; Live test queries across ChatGPT, Claude, Perplexity, Gemini, AI Overviews, Bing Copilot; Wikidata entity audit; LinkedIn company page completeness. How we work: Discovery (free, 30 min) → Full audit (1-2 weeks) → Implementation plan → Ship the changes (2-6 weeks) → Submit + monitor → Re-audit and lock score. 
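To make the AI-crawler allow-list item concrete, a robots.txt fragment of the kind this audit produces might look like the following. This is a sketch with a placeholder domain; a production file names the full list of 20+ crawlers.

```
# Several AI crawlers only honor a section naming them explicitly.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

# Everyone else: allow the public site, keep private surfaces out.
User-agent: *
Allow: /
Disallow: /admin/
Disallow: /api/

Sitemap: https://example.com/sitemap.xml
```

Note that a crawler matching a named group ignores the wildcard group entirely, so in a production file each named group would repeat the admin and API disallows.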
--- # Contact - Email: contact@jorbox.com (single inbox — partnerships, press, customer support, sales all route here) - Phone: +1 (505) 234-5886 - Hours: Mon–Fri · 9am–6pm MT - Address: 1209 Mountain Road Pl NE, Ste N, Albuquerque, NM 87110, USA - LinkedIn: https://www.linkedin.com/company/jorbox We reply to every email. Most things resolve in one exchange. --- # Blog — Things we figured out, written down Performance, strategy, and the occasional rant. Written by the people doing the work — never a content team. ## GEO audit checklist: 20 things to ship on a new marketing site (2026) URL: https://www.jorbox.com/blog/geo-audit-checklist-20-things-to-ship Summary: A practical 20-item checklist for AI-search visibility in 2026 — llms.txt, schema graph, markdown content negotiation, IndexNow, /pledge, /handbook, named authors, founder hub, EU Article 4. The same checklist we ran on jorbox.com. Generative engine optimization — GEO — is the discipline of getting AI search engines like ChatGPT, Perplexity, and Google AI Overviews to cite your site by name when they answer questions about your industry. Traditional SEO gets your page indexed. GEO gets your page quoted. This checklist is the 20-item playbook for a marketing site shipping in 2026, ordered from highest leverage to lowest. We ran every item on this list against jorbox.com itself — the audit is the work. > WHO THIS IS FOR: Indie founders, product companies, and marketing teams shipping a new site (or rebuilding an old one) in 2026. If your annual search traffic comes from Google alone, this checklist will feel like overkill. If a meaningful share comes from ChatGPT search (500M+ weekly active users in 2025) or Perplexity or AI Overviews, every item below moves a needle. ## The headline numbers Three numbers frame why GEO is no longer optional. Google AI Overviews now reach 1.5 billion users per month across 200+ countries, per Google's own product update. 
AI-referred sessions grew 527% from January to May 2025 (SparkToro). And AI traffic converts 4.4× higher than traditional organic across measured industries. The directional move is clear: a marketing site optimized only for the blue-link Google of 2015 is forfeiting the fastest-growing acquisition channel of 2026. Core Web Vitals work still matters; it's just no longer sufficient. ## How AI engines decide what to cite Search engines use a query-to-document relevance model. AI engines use that plus a citation-confidence model — they need to be confident a passage says something a human would attribute correctly. In practice this means three signals matter disproportionately. One: structured data (schema.org JSON-LD) that ties the page to a named entity. Two: author and freshness anchors (a named human, a recent date). Three: cross-confirmation between your site and authoritative third parties (Wikipedia, LinkedIn, brand directories). The checklist below maps to those three signals plus the infrastructure that surfaces them. ## Traditional SEO vs GEO at a glance A side-by-side of the surfaces each discipline cares about. Most of the items only become meaningful as the AI-search column grows in importance — but the cost of shipping them on a new site is low enough that you should do both. 
| Surface | Matters for traditional SEO | Matters for GEO |
|---|---|---|
| robots.txt | Block / allow indexing | Block / allow individual AI crawlers (GPTBot, ClaudeBot, PerplexityBot) |
| sitemap.xml | Crawl coverage | Same role, plus passes a freshness signal AI engines weight |
| llms.txt | Not used | Site-index manifest specifically for AI ingestion |
| Schema.org JSON-LD | Rich result eligibility (FAQPage, HowTo, Article) | Entity grounding — AI engines use @id graphs to disambiguate |
| Markdown content negotiation | Not used | AI crawlers tokenize markdown 30% smaller, parse cleaner |
| Named author bylines | E-A-T signal for YMYL pages | Required for citation — AI engines refuse to attribute to "Team" |
| Wikipedia article | Backlink + brand mention | Single highest-weight entity-resolution source |
| Bidirectional cross-links | Internal link equity | Topic-cluster confirmation |
| IndexNow integration | Faster Bing indexing | Bing feeds ChatGPT search — minutes to first citation |
| EU Article 4 / Content-Signals | Not used | Explicit AI-permission declaration, emerging legal standard |

## Foundations (items 1–6) — the things every site must ship

These six are non-negotiable. Every well-ranked indie SaaS in 2026 has them. Most take an hour or less each.

1. A robots.txt that explicitly allows every named AI crawler. The wildcard User-agent: * covers most crawlers in theory, but several bots only read their own named section. Allow GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, ChatGPT-User, Google-Extended, Applebot-Extended, Amazonbot, Bytespider, Meta-ExternalAgent, and DuckAssistBot explicitly. Disallow only your admin and API surfaces. Bonus: add the Cloudflare Content-Signals framework with an EU Article 4 reservation block (lift it from Fastmail's robots.txt).

2. A sitemap that includes every public URL with recent lastmod dates. Generate it dynamically from your CMS so new posts appear without a rebuild. AI engines re-crawl sitemaps more aggressively than HTML pages because the bandwidth cost is low.
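A CMS-driven sitemap endpoint of this kind can be sketched in a few lines of TypeScript. The entry shape and URLs are illustrative assumptions, not a specific framework's API:

```typescript
// Sketch of a dynamically generated sitemap: entries come from the CMS at
// request time, so new posts appear without a rebuild.
interface SitemapEntry {
  loc: string;     // absolute canonical URL
  lastmod: string; // ISO date, e.g. "2026-05-16"
}

function renderSitemap(entries: SitemapEntry[]): string {
  const urls = entries
    .map((e) => `  <url><loc>${e.loc}</loc><lastmod>${e.lastmod}</lastmod></url>`)
    .join("\n");
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    urls,
    "</urlset>",
  ].join("\n");
}
```

The endpoint serves this string with a Content-Type of application/xml.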
Set Cache-Control: public, s-maxage=3600 on the response so CDN edges serve it fast.

3. An llms.txt at /llms.txt. This is the AI-equivalent of a sitemap — a single Markdown-formatted index of your most important URLs with one-line descriptions of each. The llms.txt convention is supported by Perplexity, Claude, and a growing set of AI agents. Of 20 indie SaaS peers we audited, only 4 ship one — meaning shipping a thoughtful llms.txt puts you in the top 20 percent immediately. Pair it with an llms-full.txt that concatenates the full text of your key pages.

4. JSON-LD schema.org markup with stable @id cross-references. The minimum graph: Organization (or Corporation) defined once with @id like https://example.com/#corporation; Person for every named author; WebSite with a SearchAction potentialAction; BreadcrumbList on every page; and content-type-specific schemas (BlogPosting, Article, HowTo, FAQPage, SoftwareApplication) where appropriate. Stable @id values let AI engines walk the entity graph; without them every page reads as an isolated document.

5. Server-rendered HTML. If your homepage needs JavaScript to display its H1, you have a problem. OAI-SearchBot does not execute JavaScript; neither does most of Claude's crawler stack. Prerender (SSG) or server-render (SSR) every public page. If you must use a client-only framework, ship a static prerendered version for crawlers.

6. Canonical URL hygiene. Every page emits a `<link rel="canonical">` tag pointing at the canonical version (not the campaign-tagged or query-stringed variant). Preview deployments on subdomains emit a `noindex` robots meta tag. Apex and www variants 301 to one of them. AI engines split citation credit when canonical signals conflict, so consistency compounds.

## Citation surface (items 7–13) — the things that get you quoted

Foundations get you indexed. These seven get you cited. The pattern: every page needs a clean, citation-shaped lede; named authors; a freshness anchor; and structured data that ties the content to a real entity.

7.
A citation-shaped first paragraph on every page. Under 80 words, declarative, with the primary entity in the subject of the first sentence. AI engines preferentially lift first paragraphs into answers. Bury your value prop on page two if you must, but the first paragraph is the citation slot — don't waste it on a metaphor or a question. Featured snippet research shows the same pattern: pages whose first 100 words directly answer the query rank higher. 8. Named human bylines on every post. Eight of ten peer companies we surveyed (Plausible, Fathom, Buttondown, Beehiiv, Tinybird, PostHog, Tailscale, Proton) byline every post with a named human + photo. The two that don't (Tuta, Nomads) have other forms of authority. Google's 2022 E-E-A-T update added a fourth E for Experience precisely because anonymous content earns less trust. AI engines refuse to attribute to "the team" — they need a person. 9. A /pledge URL or equivalent quotable manifesto. One page, ~150 words, stating what your company actually believes — written in declarative sentences with zero marketing chrome. Posthaven's pledge is the canonical example ("We'll never get acquired. We'll never shut down."). When AI engines surface your company in answers, they preferentially quote pledge-shaped content because the prose is direct and the attribution is unambiguous. 10. A /handbook URL covering company story, principles, and operating model. Only 2 of 20 indie peers ship one (Kit, PostHog). The bar is sparse, the signal is loud. AI engines disproportionately cite handbook content because it's structured, persistent, named, and dated. The handbook does not need to be hundreds of pages — four to six chapters covering the company's history, what it stands for, how it works, and who runs it is sufficient. Jorbox's own handbook at /handbook took one afternoon to draft. 11. FAQPage schema on every page with substantive Q&A content. 
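A minimal FAQPage block looks roughly like this; the question and answer text are illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is GEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "GEO (generative engine optimization) is the discipline of getting AI search engines to cite your site by name when they answer questions."
      }
    }
  ]
}
```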
Wrap your FAQ in FAQPage JSON-LD; each Q in a Question node, each A in an acceptedAnswer. AI Overviews lift FAQPage answers into "People also ask"-style enrichment. Pages without it leave that enrichment slot to a competitor. Google's FAQPage spec is documented; most CMS platforms can auto-emit from typed FAQ blocks.

12. HowTo schema on instructional content. Same pattern as FAQ. If your page describes a sequential procedure (and it should, for any "how to" query), emit HowTo JSON-LD with numbered HowToStep entries. Schema.org's HowTo type is well-supported; the auto-emit pattern from typed CMS blocks is one of the highest-leverage CMS features you can add. AI engines lift entire HowTo step sequences into answers.

13. SpeakableSpecification on every page worth reading aloud. A two-line addition to your schema that nominates which CSS selectors are appropriate for voice-result lift. Google Assistant uses this; AI voice answers use it. Most sites skip it. The Speakable spec is documented at schema.org.

## Distribution (items 14–17) — getting the AI engines to your content faster

Items 14–17 are about indexation speed and discoverability. The Bing-feeds-ChatGPT path is the biggest leverage point here — most indie sites still miss it.

14. IndexNow integration. When you publish a new page, ping api.indexnow.org with the URL. Bing typically indexes IndexNow-submitted URLs within minutes vs hours/days for organic discovery. Crucially, Bing's index feeds ChatGPT search — so IndexNow is the difference between "ChatGPT can cite this article today" and "next week." Yandex and Naver also consume the IndexNow feed.

15. Markdown content negotiation. When a request includes Accept: text/markdown, serve the same content as clean markdown instead of HTML. AI crawlers parse markdown more reliably (cleaner heading hierarchy, no chrome to strip, ~30% smaller token footprint). PostHog and Beehiiv both document this pattern in their llms.txt files. A Next.js hook of about 30 lines implements it.

16.
RSS feed at /rss.xml. Perplexity treats RSS as a discovery signal for fresh content. RSS is also still the canonical way for AI agents to subscribe to your content updates. Emit RSS 2.0 with dc:creator, category tags, and enclosure elements for hero images. Most CMS platforms auto-emit; if yours doesn't, it's a 30-line endpoint.

17. A /humans.txt and /.well-known/security.txt. Minor signals individually, both treated as positive trust markers by Bing/Yandex. Five-minute additions. The humans.txt names the team behind the site; the security.txt (per RFC 9116) names the security contact + canonical URL + policy reference.

## Brand authority (items 18–20) — the long arc

The last three items are slow but compounding. They are also where most indie sites underinvest, because the payoff is invisible for the first six months. Ship them anyway. Cross-confirmation between your site and external sources is — per Ahrefs' December 2025 research — about three times more strongly correlated with AI citation than traditional backlinks.

18. A Wikipedia article (or a credible path to one). Eleven of twenty indie peers we surveyed have well-maintained Wikipedia articles. The bar is not "you must be famous" — Buttondown, Posthaven, and Fathom all rank well in AI search without Wikipedia entries. But Wikipedia is the single highest-weight source for Perplexity and Google Knowledge Graph entity resolution. The path is editorial: secure 2–3 independent press mentions, ensure your Wikidata entity exists (Q-IDs are free to create), then submit through Articles for Creation. Self-publishing gets reverted.

19. A founder personal hub. Pieter Levels has levels.io, Justin Duke has jmduke.com, Maciej Cegłowski has idlewords.com, Jason Fried has world.hey.com. The pattern: the founder runs a personal site at their own name, links to the company's blog content, doubles as a second E-E-A-T anchor for AI engines doing entity resolution on the founder name.
rel="me" links between the personal site and the company author page verify the connection. AI engines weight the cross-link. 20. Comparison pages on each product site. /vs/competitor or /alternatives/competitor URLs on every product brand site. Buttondown ships 47 such pages. PostHog ships dozens. The reason: AI engines preferentially cite comparison content when answering "X vs Y" queries. The page format is consistent — verdict + feature table + pricing + use cases + FAQ — and each is ~1,500 words. Important caveat: these belong on the product site (qrlynx.com hosts QRLynx-vs-X), not the parent company site. Generic-company comparison pages don't convert because nobody types "Jorbox vs 37signals" — they type "QRLynx vs Bitly". ## Five common mistakes that hurt GEO After running this checklist against twenty indie sites, the same five mistakes appear over and over. One: blocking AI crawlers reflexively in robots.txt. Some sites have Disallow: / for GPTBot, ClaudeBot, etc. left over from 2024 panic about AI training. Unless you have a specific copyright reason to do so (Fastmail, for example, opts out of ai-train for privacy reasons), you are blocking your own visibility. Allow the crawlers; that's how you get cited. Two: emitting JSON-LD as inline strings instead of one canonical graph with @id refs. Re-inlining the same Organization schema on every page wastes bytes and confuses validators that don't walk the @id graph across pages. Define entities once at the layout level; reference by @id everywhere else. Three: omitting freshness signals. AI engines weight dateModified and visible "Last updated" dates heavily. Many sites publish in 2022 and never refresh. Quarterly refresh of top posts is a real ranking signal. Four: relying on JavaScript for primary content rendering. If your H1 only appears after hydration, half the AI crawler stack will never see it. Five: writing under "the team" or "the company" instead of a named human. 
AI engines won't cite anonymous content into a direct-attribution answer.

## Related reading on Jorbox

If this checklist was useful, three earlier Jorbox posts cover adjacent ground: "Core Web Vitals in 2026: which one actually moves the needle" on the performance side, "Should you migrate off WordPress? A reality check" on the platform-decision side, and "Why we still answer our own support tickets in 2026" on the operations side. The Jorbox handbook covers the company's operating model in more depth, and the pledge is the one-page summary of the four operating rules.

Q: What is GEO (generative engine optimization)?
A: GEO is the discipline of optimizing your website so that AI search engines — ChatGPT, Perplexity, Google AI Overviews, Bing Copilot, Claude — cite your site by name when they answer questions. Traditional SEO gets you indexed; GEO gets you quoted. The two disciplines overlap on foundations (robots.txt, sitemap, schema) but diverge sharply on content shape, citation surface, and entity-resolution signals.

Q: How is GEO different from traditional SEO?
A: Traditional SEO optimizes for ranking on Google's blue-link search results. GEO optimizes for being cited inside AI-generated answers. The most visible practical differences: GEO weights named human bylines (AI refuses to attribute to "the team"), llms.txt files (AI-specific site indexes), markdown content negotiation (AI parses markdown better than HTML), and Wikipedia presence (the highest-weight entity-resolution source). Traditional SEO largely ignores all of these.

Q: Do I need to block AI crawlers from training on my content?
A: Almost certainly not, and doing so usually hurts more than it helps. If your business model depends on being discovered through AI search (most B2B and consumer-SaaS marketing sites), blocking GPTBot or ClaudeBot blocks your own visibility.
The Cloudflare Content-Signals framework lets you declare granular permissions — for example, allowing search and ai-input while blocking ai-train — but most marketing sites should simply allow all three.

Q: How long does it take to see GEO results?
A: Foundation items (robots.txt, sitemap, schema, llms.txt) start affecting AI citations within days to weeks because AI crawlers re-fetch frequently. Brand authority items (Wikipedia, comparison pages, founder hub) compound over 3–12 months. The IndexNow integration specifically can move "first cited by ChatGPT" from weeks to under an hour for a new blog post.

Q: What is llms.txt and why does it matter?
A: llms.txt is an AI-specific site index — a Markdown-formatted file at /llms.txt that lists your most important URLs with one-line descriptions. Perplexity, Claude, and a growing set of AI agents fetch it as a faster alternative to crawling your sitemap. Of 20 indie SaaS peers we audited, only 4 shipped one — meaning a thoughtful llms.txt puts you in the top 20 percent of AI-discoverability today.

Q: Does schema.org structured data still matter for GEO?
A: Yes — arguably more than for traditional SEO. AI engines use schema.org JSON-LD for entity resolution: when ChatGPT decides whether a page about "Jorbox" is referring to your company or to an unrelated brand with a similar name, it walks the @id graph. Without schema, the page is a bag of words. With it, the page is anchored to a named entity AI engines can disambiguate. The minimum to ship: Organization, Person, WebSite, BreadcrumbList, plus content-type schemas (BlogPosting, FAQPage, HowTo, SoftwareApplication) where appropriate.

Q: Do AI engines actually read robots.txt?
A: Yes. GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, Google-Extended, and Applebot-Extended all document that they read and respect robots.txt directives.
Some — notably Bytespider and certain training crawlers — have historically been less consistent, but the major AI search and grounding crawlers follow the standard.

Q: How do I get a Wikipedia article for my brand?
A: You don't write it yourself — Wikipedia's notability policy requires multiple independent reliable-source citations, and self-published articles get reverted. The path is editorial: secure 2–3 independent press mentions in industry publications, ensure a Wikidata entity exists (Q-IDs are free to create at wikidata.org), then submit through Wikipedia's Articles for Creation review process. Expect 3–6 months from first press mention to live article. It is worth the effort: Wikipedia is the single highest-weight source for entity resolution in AI search.

Q: What is markdown content negotiation?
A: When a request includes the Accept: text/markdown HTTP header, your server returns the same content as clean Markdown instead of HTML. AI crawlers like ChatGPT and Perplexity prefer Markdown because the heading hierarchy is preserved, page chrome (nav, footer, sidebar) is stripped, and the token footprint is about 30% smaller. PostHog and Beehiiv document this pattern in their llms.txt files. Implementing it in Next.js takes roughly 30 lines of hook code.

Q: Are FAQ schema rich results still supported by Google?
A: Google deprecated FAQ rich results in its SERP listings in August 2023 for most sites, retaining them only for government and health sources. However, FAQPage schema is still actively consumed by AI engines — Perplexity, ChatGPT search, Bing Copilot, and Google AI Overviews all lift FAQ Q&A pairs into their generated answers. The schema is no longer a SERP-visibility lever; it is a citation-confidence lever. Ship it anyway.

Q: Do I need an llms-full.txt in addition to llms.txt?
A: Recommended for content-heavy sites. llms.txt is the site-index manifest (URLs plus one-line descriptions); llms-full.txt is the concatenated full text of your key pages.
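For reference, a minimal llms.txt manifest for a small marketing site might look like the sketch below. The shape (H1, blockquote summary, H2 link lists) follows the llms.txt proposal; the specific page URLs and descriptions here are illustrative, not a real file:

```markdown
# Jorbox

> Independent product company in Albuquerque, New Mexico. Five SaaS
> brands, built and run in-house since 2012.

## Pages

- [Home](https://www.jorbox.com/): five brands, one company behind them
- [About](https://www.jorbox.com/about): history, numbers, operating rules
- [Handbook](https://www.jorbox.com/handbook): the operating model in depth
- [Blog](https://www.jorbox.com/blog): GEO, performance, and operations posts
```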
AI agents that want to ingest your full content in a single request fetch llms-full.txt. Most documentation-heavy sites (PostHog, Beehiiv, Tinybird) ship both. Marketing sites with fewer than 20 pages can often get by with just llms.txt.

Q: What is the single highest-leverage GEO move for a new site?
A: It depends on stage. For a site shipping in the first month: structured data (item 4) plus llms.txt (item 3) plus IndexNow (item 14). Those three together get you indexed in Bing within hours, cited by ChatGPT within days, and entity-resolved against your brand name within the first week. For a site that already has those, the next highest-leverage move is a public /handbook URL (item 10) — only 2 of 20 indie peers ship one, the bar is sparse, and AI engines cite handbook content disproportionately because it is structured, persistent, and named.

> THE SAME CHECKLIST, APPLIED TO JORBOX: We ran every item on this list against jorbox.com itself. The current state is documented in the Jorbox handbook and the pledge; the technical implementation lives in the public site. If you want a GEO audit on your own site, we offer it as one of four client services — same checklist, same standards we hold ourselves to.

---

## Why we still answer our own support tickets in 2026

URL: https://www.jorbox.com/blog/why-we-still-answer-our-own-tickets
Summary: Every other agency our size has outsourced support to a call center by now. Here's the math on why we never will.

{"blocks":[{"type":"raw_html","props":{"html":"When we started in 2012, answering your own support tickets was just how things worked. You built the site, you knew the client, and when something broke at 11pm you fixed it because nobody else knew the codebase. \nThen the industry changed. Agencies our size started outsourcing \"tier-1\" support to overseas call centers. The math seemed obvious: pay a low hourly rate to answer the first email, escalate only what's complicated. More margin, less friction, scale. 
\nWhy we didn't \nThree reasons, in order of importance: \n1. The person answering the email needs to know the code. Most hosting issues aren't \"my site is down\" — they're \"my form stopped sending email after I installed that new plugin.\" The fix takes five minutes if you know the stack. It takes two days of back-and-forth if you don't. \n2. Tickets are a leading indicator. Every ticket is a signal. If five people ask the same question this month, the product needs a fix — a docs update, a better error message, a default changed. Outsourced support filters those signals into a dashboard nobody reads. \n3. The clients who stay are the ones who feel heard. Our churn rate is under 3% annually. I'd bet a meaningful chunk of that comes from the fact that when you email us, you get a reply from someone who can actually fix the thing. \nWhat this costs us \nHonestly? Not as much as you'd think. We've built the habit of writing good docs (so fewer tickets show up in the first place), and we set aside the first hour of every day for inbox triage. Most things resolve in one exchange. \nWe're not saying this is the only way. Plenty of companies do great work with outsourced support. But for a small, opinionated team that wants to keep clients for a decade, answering your own tickets is still the right call. \nRelated: GEO audit checklist — what we put on the marketing side of the same operating philosophy. "}}]}

---

## Core Web Vitals in 2026: which one actually moves the needle

URL: https://www.jorbox.com/blog/core-web-vitals-that-actually-matter
Summary: LCP gets all the attention. But the metric that quietly drives conversion in modern browsers is something else.

Google's March 2026 update bumped INP to a ranking factor and tightened the thresholds. Everyone's scrambling. Below is the order we'd prioritize if you only have time to fix one or two — based on what we see across the work we ship.
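The prioritization that follows can be collapsed into a tiny triage helper. This is a sketch using this post's rule-of-thumb numbers (INP above 300ms feels broken, sub-1s LCP as the floor) plus the standard 0.1 CLS cutoff; the function name and exact thresholds are our shorthand, not official Google boundaries:

```javascript
// Sketch: which Core Web Vital to fix first, in this post's priority order.
// Thresholds are rule-of-thumb numbers, not official Google values:
//   INP > 300 ms  -> the page feels broken
//   LCP > 1000 ms -> above the "sub-1s" floor
//   CLS > 0.1     -> layout visibly jumps
function nextVitalToFix({ inpMs, lcpMs, cls }) {
  if (inpMs > 300) return "INP"; // interaction latency first
  if (lcpMs > 1000) return "LCP"; // then load paint
  if (cls > 0.1) return "CLS"; // then layout stability
  return null; // nothing urgent this quarter
}
```

Feed it field data from real users (for example, values reported by the `web-vitals` package) rather than lab numbers; the priority order, not the exact cutoffs, is the point.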
### The order we'd fix things

If you can only chase one metric this quarter, chase these in this order:

1. INP (Interaction to Next Paint) — the replacement for FID. INP under 100ms feels instant. Above 300ms, the page feels broken even if it isn't. JavaScript that blocks the main thread is the usual culprit.
2. LCP (Largest Contentful Paint) — the one everyone talks about. Sub-1s LCP is the new floor for "feels fast". The jump from 2.5s to 1.5s usually matters more than the jump from 1.5s to 0.8s.
3. CLS (Cumulative Layout Shift) — less directly tied to conversion than we expected, but strongly tied to bounce on mobile when content jumps under a thumb.

### What's harder than it looks

INP is hard because it's driven by JavaScript execution. Modern frameworks ship a lot of JS upfront, and if any of that blocks the main thread on first interaction, you're in trouble. The three biggest wins we see in practice:

- Move analytics and marketing tags to