I’ve been blogging (first on TypePad, but shortly after on WordPress) since 2004. That’s 22 years of posts - 5,530 of them - plus books, films, running logs, and events. WordPress served me well for a long time. Every few years I’d spend money on a consultant to redo the theme and structure of the site, but each time it was more complicated (and expensive) than the last. And every time I wanted to change something on the site, it got more difficult and fragile.

Given how deep I’ve gone into Claude Code, I started exploring different approaches for managing web-facing content. As part of figuring out the AuthorMagic website creator, I discovered Hugo. I experimented with it for AdventuresInClaude and liked it, so I decided to see if I could use Claude Code to do a full migration of Feld Thoughts to Hugo.

A day later, Feld Thoughts has a new home. While the theme is simple to start, I have full control over it and will iterate on it (in Claude Code) to get it in a form that I like.


The target stack is straightforward. Hugo generates a static site from Markdown files. Vercel hosts it. I’ve been spending a lot of time with Markdown files lately and I enjoy working with them much more than anything that requires formatting. No more database, no PHP, and no WordPress updates. Just Markdown files in a git repo.

The first step in the process was getting the content out. WordPress has a built-in REST API at yoursite.com/wp-json/wp/v2. Claude wrote a TypeScript script that fetches all posts via paginated API calls, fetches all categories and tags, and converts HTML to Markdown using the Turndown library. It strips WordPress block comments, handles caption shortcodes, and decodes all the HTML entities WordPress loves to scatter through titles and descriptions - smart quotes, em dashes, ellipses.
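The shape of that paginated fetch is simple enough to sketch. Here's a rough TypeScript version - the endpoint and parameters are the standard WP REST API ones, the helper names are mine, and the Turndown conversion step is omitted:

```typescript
// Sketch of the paginated export loop. The real script also pipes each
// post's rendered HTML through Turndown to produce Markdown.

const PER_PAGE = 100; // the WP REST API caps per_page at 100

// Build the URL for one page of results.
function postsPageUrl(site: string, page: number): string {
  return `${site}/wp-json/wp/v2/posts?per_page=${PER_PAGE}&page=${page}`;
}

// Fetch every post, page by page, until the API runs out of pages.
async function fetchAllPosts(site: string): Promise<unknown[]> {
  const all: unknown[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(postsPageUrl(site, page));
    if (res.status === 400) break; // WP returns 400 past the last page
    const batch: unknown[] = await res.json();
    if (batch.length === 0) break;
    all.push(...batch);
  }
  return all;
}
```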

The script uses a state file for resumable checkpoints. If it crashes or gets rate-limited, it picks up where it left off. This turned out to be essential.
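A minimal version of that checkpointing might look like the following sketch. The state file name and shape here are my assumptions, not the script's actual format:

```typescript
import * as fs from "node:fs";

// Assumed checkpoint shape: which page we finished and which posts are done.
interface ExportState {
  lastPage: number;      // last page fully written to disk
  exportedIds: number[]; // post IDs already converted
}

const STATE_FILE = ".wp-export-state.json"; // hypothetical file name

function loadState(): ExportState {
  if (!fs.existsSync(STATE_FILE)) return { lastPage: 0, exportedIds: [] };
  return JSON.parse(fs.readFileSync(STATE_FILE, "utf8"));
}

function saveState(state: ExportState): void {
  // Write to a temp file and rename, so a crash mid-write
  // can't leave a corrupt checkpoint behind.
  fs.writeFileSync(STATE_FILE + ".tmp", JSON.stringify(state));
  fs.renameSync(STATE_FILE + ".tmp", STATE_FILE);
}
```

On restart, the loop begins at `lastPage + 1` and skips any IDs already in `exportedIds`.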


A key decision was the content structure. I used Hugo “page bundles” to preserve the WordPress URL structure:

content/archives/2012/10/random-act-of-kindness-jedi-max/index.md

This maps directly to feld.com/archives/2012/10/random-act-of-kindness-jedi-max/ - the same URL WordPress used. Every old link still works. No redirects needed.
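The URL-to-bundle mapping is mechanical enough to express in a few lines. This is an illustrative sketch, not code from the migration scripts:

```typescript
// Map a WordPress permalink to the Hugo page bundle path that serves it,
// assuming the /archives/:year/:month/:slug/ structure described above.
function bundlePathForUrl(url: string): string | null {
  const m = url.match(/\/archives\/(\d{4})\/(\d{2})\/([^/]+)\/?$/);
  if (!m) return null;
  const [, year, month, slug] = m;
  return `content/archives/${year}/${month}/${slug}/index.md`;
}
```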

Hugo’s configuration makes this explicit:

[permalinks.page]
  archives = "/archives/:year/:month/:slug/"

If you have custom post types - I had books, films, running logs, and events - those need separate exports since they live at different API endpoints. Each gets its own content directory.
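As a sketch, assuming each custom type's REST base matches its slug (WordPress lets plugins override this, so treat the slugs as assumptions), the per-type endpoints and content directories look like:

```typescript
// Hypothetical custom post type slugs - yours will differ.
const CUSTOM_TYPES = ["books", "films", "running", "events"];

// Each custom type lives at its own REST endpoint...
function endpointForType(site: string, type: string, page: number): string {
  return `${site}/wp-json/wp/v2/${type}?per_page=100&page=${page}`;
}

// ...and gets its own Hugo content directory.
function contentDirForType(type: string): string {
  return `content/${type}`;
}
```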


The media download was tricky. WordPress CDN URLs come in a bunch of variants - i0.wp.com/feld.com, www.feld.com, direct feld.com paths, all with various query parameters. The media script scans every exported markdown file for these URLs, normalizes them, and downloads the actual images.
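The normalization idea can be sketched like this - a simplified version that collapses the i0.wp.com proxy form, strips www., and drops resizing query parameters. The real script's rules are more thorough:

```typescript
// Collapse the CDN variants of a media URL down to one canonical form.
function normalizeMediaUrl(raw: string): string {
  const u = new URL(raw);
  // i0.wp.com/feld.com/path -> https://feld.com/path
  if (/^i\d+\.wp\.com$/.test(u.hostname)) {
    const [, host, ...rest] = u.pathname.split("/");
    return `https://${host}/${rest.join("/")}`;
  }
  // Drop www. and any query parameters (?w=640&ssl=1 and friends).
  const host = u.hostname.replace(/^www\./, "");
  return `https://${host}${u.pathname}`;
}
```

With every variant reduced to the same string, deduplication and reference counting become simple map lookups.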

The clever part is the reference counting. Images used by only one post get co-located in that post’s page bundle directory. Images shared by two or more posts go to static/images/ with year-month prefixes to avoid filename collisions. After downloading, it rewrites all the markdown URLs from WordPress CDN paths to local relative paths.
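Here's a sketch of that reference-counting logic, simplified: it skips the year-month collision prefixes the real script adds to shared images:

```typescript
// Decide where each image should land, given which posts reference it.
function planImageLocations(
  refs: Map<string, string[]> // post bundle dir -> media URLs it uses
): Map<string, string> {      // media URL -> destination path
  // Count how many distinct posts reference each URL.
  const counts = new Map<string, number>();
  for (const urls of refs.values())
    for (const url of new Set(urls))
      counts.set(url, (counts.get(url) ?? 0) + 1);

  const dest = new Map<string, string>();
  for (const [postDir, urls] of refs)
    for (const url of urls) {
      const file = url.split("/").pop()!;
      dest.set(
        url,
        counts.get(url)! > 1
          ? `static/images/${file}` // shared: central (real script prefixes year-month)
          : `${postDir}/${file}`    // single-use: co-located in the page bundle
      );
    }
  return dest;
}
```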

A separate cleanup pass fixes HTML entities that survived in the frontmatter. WordPress stores entities like &amp;amp; and &amp;#8217; in titles and descriptions. These need to be decoded to actual characters, then the YAML strings need to be re-escaped properly. This is the kind of thing that sounds trivial but breaks in surprising ways when you have 5,530 posts.
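A stripped-down version of that entity cleanup, covering only the common entities (the real pass handles many more, plus the YAML re-escaping):

```typescript
// Map the entities WordPress scatters through titles and descriptions
// to their actual characters.
const ENTITIES: Record<string, string> = {
  "&amp;": "&",
  "&#8216;": "\u2018", // smart single quotes
  "&#8217;": "\u2019",
  "&#8220;": "\u201C", // smart double quotes
  "&#8221;": "\u201D",
  "&#8211;": "\u2013", // en dash
  "&#8212;": "\u2014", // em dash
  "&#8230;": "\u2026", // ellipsis
};

function decodeEntities(s: string): string {
  return s.replace(/&(?:amp|#\d+);/g, (m) => ENTITIES[m] ?? m);
}
```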


Claude wrote a verification script that fetches the WordPress sitemaps and compares every URL against the Hugo content directory. It reports the match rate and lists any missing posts. We iterated until it hit 100% accuracy.
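The core of such a verification script is small. A sketch, assuming the /archives/:year/:month/:slug/ permalink structure:

```typescript
// Pull every <loc> URL out of a WordPress sitemap.
function extractSitemapUrls(xml: string): string[] {
  return [...xml.matchAll(/<loc>([^<]+)<\/loc>/g)].map((m) => m[1]);
}

// Fraction of sitemap URLs that map to an existing Hugo page bundle.
// `exists` is injected so the check is testable without touching disk.
function matchRate(urls: string[], exists: (path: string) => boolean): number {
  const hits = urls.filter((u) => {
    const m = u.match(/\/archives\/(\d{4})\/(\d{2})\/([^/]+)\/?$/);
    return m !== null && exists(`content/archives/${m[1]}/${m[2]}/${m[3]}/index.md`);
  });
  return hits.length / urls.length;
}
```

Anything below 1.0 means a post fell through a crack somewhere in the export.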

For the theme, I used PaperMod as a starting point and forked it. The fork lets me customize branding, layouts, and features without worrying about upstream updates. I added client-side search via Pagefind, which is essential for a site this size. Pagefind builds a static search index at build time. I added data-pagefind-body to the single post template so it only indexes post content - not navigation, footers, or other chrome. This dropped the index from 10,000+ pages to about 5,500 and cut the search index build time from 32 seconds to 8 seconds.


Deployment is simple. Push the Hugo repo to GitHub, connect it to Vercel, and point the domain’s DNS to Vercel. Every git push to main triggers a rebuild and deploy. My 5,530 posts build in about 47 seconds. The whole deploy - clone, build, CDN upload - is under 3 minutes.

The DNS cutover required documenting every existing record first - MX records for email routing, SPF, DKIM, and DMARC for email authentication, and CAA records for SSL certificate issuance. I recreated all of them in Vercel DNS after the transfer.

I also switched from Mailchimp to Kit for email subscribers. Kit has RSS-to-email automation that watches the Hugo RSS feed and sends new posts to subscribers automatically. No API integration needed.


If you’re thinking about doing this, three things matter most. First, URL preservation is non-negotiable. Get the permalink structure right from the start so every old link works without redirects. Second, the media download is where things get messy - WordPress CDN URLs come in many variants, and you need reference counting to handle shared images correctly. Third, write a verification script and run it obsessively until you hit 100%.


I’ve open-sourced the migration scripts at github.com/bradfeld/wp-to-hugo. They’re genericized - you configure your site URL and custom post types in a single JSON file and the scripts handle the rest.

The toolkit has five scripts, meant to be run in order:

  1. wp-export - Fetches all posts and pages via the WP REST API, converts HTML to Markdown, and writes Hugo page bundles with proper frontmatter (categories, tags, descriptions).
  2. export-custom-types - Exports custom post types (books, films, whatever your site has) to separate content directories.
  3. wp-media-download - Scans all exported markdown for WordPress media URLs, downloads the images, and rewrites the URLs to local paths. Handles the reference counting (single-use images go in the page bundle, shared images go to static/images/).
  4. fix-entities - Cleans up HTML entities that WordPress stores in titles and descriptions (&amp;amp;, &amp;#8217;, smart quotes, etc.).
  5. wp-verify - Fetches your WordPress sitemap and compares every URL against the Hugo content directory. Run this until you hit 100%.

To use the scripts you’ll need Node.js (version 20+) and a WordPress site with the REST API enabled (most have it on by default - check by visiting yoursite.com/wp-json/wp/v2/posts). You’ll also need Hugo installed to build the site, a GitHub repo to store it, and a hosting platform like Vercel or Netlify to serve it. The scripts handle the content migration - setting up Hugo, choosing a theme, and configuring deployment is on you, but Hugo’s quick start guide covers most of it.

The scripts are resumable - if one crashes or gets rate-limited, re-run it and it picks up where it left off.

The repo’s documentation has a detailed walkthrough of how each phase works, including the media URL normalization strategy and the reference counting logic.


Claude Code did all the heavy lifting. I described what I wanted and it wrote the scripts, configured Hugo, set up the theme, and handled deployment.