  • Journalists and threat intelligence analysts are probably the top two, but the full list is longer than most people expect:

    • Journalists & fact-checkers — tracking dozens of sources in real time without RSS is genuinely painful. Anyone still copy-pasting URLs into a browser tab collection in 2025 is losing hours a week.
    • Cybersecurity / threat intel analysts — following CVE feeds, vendor security bulletins, and dark web monitors. RSS is practically a professional tool here.
    • Academic researchers — journal alert emails are notoriously bad. RSS from arXiv, PubMed, or Google Scholar is a cleaner workflow.
    • Stock/market analysts & fintech folks — SEC EDGAR still has RSS. Enough said.
    • SEO/content marketers — monitoring competitor blogs, Google News, and niche publications is a full-time habit.
    • Policy & legal professionals — government gazette feeds, legislative tracking, regulatory body updates.
    • Open-source developers — GitHub releases, package changelogs, CVEs affecting their stack.

    Honestly, the common thread is: anyone whose job requires staying current across many sources simultaneously. RSS is just a better alert system than email newsletters or social algorithms — it doesn’t bury, reorder, or monetize your feed.

    The real tragedy is that most of these professionals don’t even know RSS is still alive and thriving. Half the evangelism job is just telling people it exists.


  • If you’re on Linux/macOS, a short script — some xmllint gymnastics, or more readably plain Python — handles this cleanly:

    import xml.etree.ElementTree as ET
    
    files = ["feeds1.opml", "feeds2.opml", "feeds3.opml"]
    
    # Use the first file as the base document
    base = ET.parse(files[0]).getroot()
    body = base.find("body")
    
    # Seed the dedup set with URLs already present in the base
    # (this must happen BEFORE merging, or base duplicates slip through)
    seen = {o.get("xmlUrl") for o in base.iter("outline") if o.get("xmlUrl")}
    
    # Append outlines from the remaining files, skipping duplicate xmlUrls
    for f in files[1:]:
        for outline in ET.parse(f).iter("outline"):
            url = outline.get("xmlUrl")
            if url and url not in seen:
                seen.add(url)
                body.append(outline)
    
    ET.ElementTree(base).write("merged.opml", encoding="utf-8", xml_declaration=True)
    

    Run it, done — deduplication keys on each outline’s xmlUrl attribute. One caveat: outlines nested inside folders in the extra files get appended flat at the top level of the merged body.
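
    If you want to verify the dedup behaviour before pointing it at real exports, here’s the same merge logic wrapped in a function and run on two toy OPML strings (the feed URLs and `merge_opml` name are just illustrative):

    import xml.etree.ElementTree as ET
    
    def merge_opml(docs):
        """Merge OPML documents (given as XML strings), deduplicating by xmlUrl."""
        base = ET.fromstring(docs[0])
        body = base.find("body")
        seen = {o.get("xmlUrl") for o in base.iter("outline") if o.get("xmlUrl")}
        for doc in docs[1:]:
            for outline in ET.fromstring(doc).iter("outline"):
                url = outline.get("xmlUrl")
                if url and url not in seen:
                    seen.add(url)
                    body.append(outline)
        return base
    
    a = '<opml><body><outline xmlUrl="https://a.example/feed"/></body></opml>'
    b = ('<opml><body><outline xmlUrl="https://a.example/feed"/>'
         '<outline xmlUrl="https://b.example/feed"/></body></opml>')
    
    merged = merge_opml([a, b])
    urls = [o.get("xmlUrl") for o in merged.iter("outline")]
    print(urls)  # the duplicate a.example entry appears only once

    Same idea as the script above, just easier to sanity-check interactively.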