My website was built on WordPress, running the trusty old Divi theme, with the usual entourage of plugins. On the surface, it worked. But under the hood, it felt like renting a warehouse to store a backpack. Just publishing a blog post required navigating blocks inside shortcodes inside modules wrapped in layout builders. The code was bulky, and even basic changes started to feel unnecessarily complicated. I've worked with my fair share of website builders over the years, and I knew one thing: if you want customization and flexibility, they make you pay for it—not with money, but with time, workarounds, and limits you didn't know were there until you hit them.

Since I was already experimenting with modern AI tools, I had an idea: what if I built the site with AI? Not using one of those shiny new "AI website generators" that drop in a few stock images and center-aligned headings and call it a day. I'm talking about sitting down with Claude AI and writing the actual code—HTML, CSS, JS, PHP. The plain old kind.

Would it work? Could Claude deliver something usable, lightweight, and flexible? It would certainly be cheaper. But could it also be good?

I had a clear set of rules for myself:

  1. Keep the codebase clean, minimal, and optimized—so I could easily make changes myself, or through AI.
  2. Avoid bulky CMS platforms and frameworks (they have their benefits, but not for what I needed).
  3. Build my own lightweight framework for managing blog posts and, in the future, video content—something simple, fast, and easy to maintain.
  4. Retain complete control over the code, layout, and user experience—every div and line should be understandable.
  5. And most importantly, test whether AI can actually co-develop a site without needing detailed engineering prompts or hand-holding.

Because I knew that if this worked—if I could build and control the foundation—then adding major features later would be easy. Just add a prompt, do a bit of back-and-forth, and ship.

The result is funnelysis.com: my new home for articles, videos, courses, AI-built tools, and client work. There's no React, no Tailwind, no WordPress. Just a lean, hand-rolled setup built with plain PHP, CSS, HTML, and JavaScript—stitched together to form a framework that's fast and clean.

But let's be honest. That 1MB of generated website code didn't happen overnight. It took 45 conversation threads and around 35 hours of back-and-forth to get that slim little codebase into shape. Which is to say: no, it's not just "type a prompt and get a website." It takes guidance, iteration, and a surprising amount of architectural thinking.

Is it a complete, production-grade system? Yes and no. It works beautifully for what it's meant to be—a lean, custom site with just enough interactivity—but there's always room to grow. Still, it's the most control and clarity I've ever had over a website I use daily. The page-load speed scores are almost perfect on desktop and impressive on mobile. AI handled my SEO work as well: schema markup, meta tags, keywords, and more. Even 30-40% of the content was written with AI.

Also, how did this blog you're reading come together? I copied the content of 45 long conversation threads with Claude into a dozen Notepad files, then asked ChatGPT to extract moments where I was clearly trying to figure something out. I'd say: "Hey, at this point, I was trying to do X, then this happened," and ChatGPT would build on that. We kept refining until the tone and structure felt right. Of course, the output and the narrative arc have been heavily edited and choreographed by me—but ChatGPT was exceptional at catching my intent and polishing it with clarity.

And along the way, I learned a lot. Some things were expected. Others… not so much. Here are 11 insights from that process—shared for anyone else wondering what it's really like to build with AI.

1. Even a smart assistant needs clearer instructions than humans do

If you're vague, AI will guess. And it's usually confidently wrong.

Summary: When collaborating with AI, instructions that feel intuitive to humans often need to be spelled out like IKEA manuals—explicit, structured, and step-by-step.


Full Anecdote:

There was a moment while editing a web page where I needed to remove just one specific section. I thought I was clear. I even used polite, decisive language: "Please remove this section and leave everything else as is." Claude, my AI assistant, agreed. But its interpretation was... enthusiastic.

Instead of removing the designated portion, it quietly deleted the entire testimonials section. I tried again. It then altered something else. The instruction, it seemed, needed to evolve from a gentle nudge to a courtroom-style exhibit.

Eventually, I highlighted the exact HTML block, line by line, almost like guiding a very intelligent but over-cautious intern: "This is the furniture. That's the chair I'm talking about. Please don't sell the couch."

It worked.

What I realized is this: smart AI thrives on clarity, not casual intent. There's no such thing as "you know what I mean" in the world of structured markup. But the upside? It never lost patience. We debugged together, one div at a time, and emerged slightly wiser—both of us.


2. Tiny design details become hard to fix when AI fixates on the wrong idea

Once it locks onto a flawed solution, you’ll need a scalpel—not a sledgehammer—to redirect it.

Summary: AI doesn’t always get the visual idea wrong—but when it does, it gets stubborn. Fixing the smallest design misfires often takes more targeted prompting than you’d expect.


Full Anecdote:

One day, I just wanted to standardize the bullet points. That’s it. Swap the default black dots for soft yellow checkmarks inside circular backgrounds—simple, elegant, consistent with the rest of the design.

Claude jumped in eagerly, but instead of reusing a working pattern from another section, it came back with something entirely different: oversized icons, aggressive spacing, a mini treatise on list aesthetics. I corrected it. Claude re-explained its reasoning. I reworded the prompt. It reworded the CSS. We looped like this a few times.

Eventually, I had to be surgical: *“Use the exact style from the course cards. Don’t reinterpret it. Don’t upgrade it. Just copy it.”* That worked. Suddenly, the checkmarks clicked into place, the spacing aligned, and everything looked like it always belonged there.

It wasn’t that the AI didn’t know how to do it. It just wouldn’t let go of its first idea. And when that happens, you can’t brute-force a fix. You have to gently reframe, restate, and steer—like debugging a conversation, not a function.

AI can be brilliant with design. But when it grabs onto the wrong visual idea, your biggest tool isn't new code—it's sharper direction.


3. Taste isn’t programmable—but it is teachable

AI learns design sensibilities through patient feedback, not prompts alone.

Summary: Even in automation, taste isn't programmable. It's teachable—through feedback, iteration, and care. And sometimes, that care looks like nudging a background from 'subtle' to 'visible' one pixel at a time.


Full Anecdote:

While redesigning the homepage, I asked Claude to replicate a specific checkered background pattern that I had used on the original site. It was a subtle visual motif—barely noticeable, but part of the site's DNA.

Claude understood the assignment. Almost.

The first version looked... soft. Decorative. Like a pleasant tablecloth at a café. It had diagonal lines, but they were too faint, too delicate. I asked for stronger visibility, a touch more boldness. Claude adjusted. Then adjusted again. Each iteration brought us closer to the original—but it was like teaching someone to recreate a family recipe from taste alone. The AI nailed the ingredients, but the texture was always slightly off.

Eventually, we got there. The tiles sharpened. The background gained the right level of personality without becoming loud. But what stood out was the process: it wasn't just about CSS—it was about interpreting style, sensitivity, and the fine line between subtle and forgettable.


4. AI can write code, but it can’t see how it renders—until you show it

It assumes the code looks good unless you tell it otherwise.

Summary: Even when the code is correct, AI has no sense of how things look in the browser. From visibility issues to layout oddities, it’s your eyes—not the AI’s—that complete the loop.


Full Anecdote:

At one point, I wanted to showcase client logos in a horizontal row on my site—something clean, flowing, not too flashy. Think of a subtle marquee that quietly says: "Here's who we've worked with."

Claude obliged. The first version worked, technically. The logos were scrolling in a line. But the gray overlay dulled them down, and some logos—especially those with white text—vanished into the background like ghosts at a meeting they weren't invited to.

I asked for revisions: lighter background, full color logos, better contrast. Claude responded with a more elegant version. Still, one logo appeared smaller than the rest, while another felt too loud. It became a matter of proportions, hierarchy, balance.

It started as a basic design element and turned into choreography. Each logo needed space. Visibility. Equal footing. The solution? A translucent background, custom logo sizing, and a marquee that felt less like a news ticker and more like a soft glide.

In hindsight, I hadn't just been trying to "show logos." I was trying to tell a quiet success story. Claude understood that—not all at once, but through collaboration. Turns out, when each pixel has a job, AI can be surprisingly good at assigning roles.


5. One tiny script can quietly break five other things

Code dependencies are invisible to AI—until something crashes and you point it out.

Summary: Sometimes, the tiniest additions to a codebase can set off a delicate chain of dependencies—and AI needs to be reminded that code isn't written in a vacuum.


Full Anecdote:

I wanted to add a simple animation script to an existing file—nothing elaborate, just a bit of interactivity to bring a services section to life. Claude offered to help, and the implementation looked clean. It appended the function neatly at the end of the DOMContentLoaded block and even handled error checking gracefully.

But when I loaded the site, parts of another section—totally unrelated—started behaving strangely. A carousel didn't load. Testimonials stopped rotating. For a moment, it felt like I'd added a single spoonful of spice and ruined the whole dish.

Turns out, the function names used in the animation overlapped subtly with those in the original file. Nothing clashed outright—but things broke silently. The kind of break you only notice when something doesn't happen, like a bus that never arrives but also never reports a delay.

The fix was straightforward: more isolation, stricter naming, and a bit more awareness that this wasn't a fresh script file—it was part of an ecosystem. Claude adjusted quickly once the context was spelled out.
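The shape of that isolation fix is easy to sketch. Here's an illustrative version (not the site's actual code): wrap each feature in an IIFE so its helpers stay private, and expose one namespaced object per feature instead of loose globals.

```javascript
// Each feature lives in its own IIFE, so both can define a private
// init() without shadowing or overwriting the other. The feature
// names and return values here are purely illustrative.
const ServicesAnimation = (() => {
  function init() {
    return 'services animation ready';
  }
  return { init };
})();

const TestimonialCarousel = (() => {
  // This init() is a different function from the one above —
  // no silent collision, even in the same file.
  function init() {
    return 'testimonial carousel ready';
  }
  return { init };
})();
```

With each feature behind its own namespace, `ServicesAnimation.init()` and `TestimonialCarousel.init()` can coexist on one page without quietly breaking each other.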

The takeaway? AI can generate elegant code, but it doesn't naturally read the "room" of a project. It doesn't assume that one innocent line might knock over something six functions away.


6. AI builds components that shine solo—but stack them together and things fall apart

Duplicate IDs, global styles, conflicting scripts… coexistence still needs a director.

Summary: AI treats every component like it's the only one on stage. Unless you remind it to look left and right, it’ll reuse IDs, styles, and scripts like no one else is around.


Full Anecdote:

I had a testimonial block working beautifully. Claude helped me build it with flexible logic—enable or disable the header, toggle the call-to-action, keep it clean and DRY. It worked great on the first page.

Then I reused the same file elsewhere—on a page that already had a different testimonial section. That’s when things quietly broke.

The second block started behaving oddly: headings disappeared, layouts collapsed, carousels refused to scroll. The components weren’t colliding because of logic—they were colliding because Claude reused the same IDs, class names, and function scopes across both. As far as it was concerned, this component was the only one in existence.

It hadn’t factored in that these blocks might share space.

The fix wasn’t hard—scoping styles, namespacing variables, separating init scripts. But it was revealing. Claude had written solid code. Just not coexisting code.
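A minimal sketch of the per-instance idea (names are illustrative, not the site's code): give each testimonial block its own closed-over state instead of a shared global index, so two blocks on the same page can't interfere.

```javascript
// Factory that creates one rotator per testimonial block.
// Each instance keeps its own private index, so advancing one
// rotator never moves another.
function createRotator(items) {
  let index = 0; // private to this instance
  return {
    current: () => items[index],
    next() {
      index = (index + 1) % items.length;
      return items[index];
    },
  };
}
```

In the browser you'd create one rotator per matching element (e.g. one per `.testimonials` block) rather than a single page-wide instance.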

That’s the catch: AI doesn’t account for a broader cast of characters on your page. It needs to be told who’s already in the room.


7. Form-to-database integration? That’s where AI fumbles

The logic is simple, but getting it to work cleanly takes rounds of nudging.

Summary: Getting a form to submit is one thing—getting it to speak fluently with a structured system like Notion takes patience, a working fallback, and a willingness to debug what no console will admit went wrong.


Full Anecdote:

It started with a simple goal: when someone fills out a form on the site, their details should land neatly inside a Notion database. The logic was sound, and the intention was straightforward. But getting there? Far from simple.

The first version looked perfect—until it wasn't. No errors. No responses. Just radio silence from Notion. The data was vanishing somewhere between browser and database, and the tools weren't exactly volunteering answers.

So I asked ChatGPT to strip everything down and just give me the most basic working version—a form that posted a name and email to Notion. That finally worked, but only after a few rounds of trial and error. Once that minimal version clicked, I used it as a bridge—feeding the learnings back into the more complex implementation I had in mind.

Then came the edge cases. Optional fields causing silent failures. Submissions showing up without source labels. Notion refusing to accept input without ever saying why. Each adjustment taught me something—not just about the payload structure, but about how quietly a system can reject input when something is even slightly off.

Eventually, everything held together. The data arrived. The sources were tagged. Even optional fields cooperated. But it took a working sample, patient iteration, and a surprising amount of diplomacy to get the integration to feel natural.
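For reference, the trickiest part was the payload shape Notion's "create page" endpoint expects for a database row. Here's a hedged sketch of a payload builder: the property names (`Name`, `Email`, `Source`) are assumptions and must match the target database's schema exactly.

```javascript
// Build the request body for POST https://api.notion.com/v1/pages.
// Property names must exactly match the database schema — the ones
// used here are illustrative assumptions.
function buildNotionPayload(databaseId, { name, email, source }) {
  const properties = {
    Name: { title: [{ text: { content: name } }] },
    Email: { email: email },
  };
  // Optional fields should be omitted entirely rather than sent as
  // null or empty — Notion can silently reject malformed properties.
  if (source) {
    properties.Source = { select: { name: source } };
  }
  return { parent: { database_id: databaseId }, properties };
}
```

The actual request would also need an `Authorization: Bearer` token and a `Notion-Version` header; the builder above only covers the body, which is where most of the silent failures lived.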


8. Some problems reveal themselves only when humans notice them first—like caching

AI doesn’t spot the “it’s working for me” problem until you ask, “Is it working for everyone else?”

Summary: Some problems go unnoticed by both humans and AI—until testing exposes them, and the AI responds with a sheepish "oops" followed by a remarkably solid fix.


Full Anecdote:

The site had gone through a few visual updates—minor spacing tweaks, cleaner button styling, a hover effect here and there. I hit refresh. Everything looked great. Until someone else viewed the same page and saw… the old version. No updated styles. No animations. Just a polite, pre-makeover version of the website, cached comfortably in their browser.

I brought it up with Claude. No drama, just a quiet observation: "Looks like we're not busting the cache."

That's when the shift happened.

Almost immediately, Claude acknowledged it—no denial, no over-explanation. Just a pivot straight into solution mode. It introduced the idea of using file-based versioning: appending the file's last modified timestamp as a query parameter. Clean, effective, and honestly more robust than I'd expected from a "forgot to mention this" moment.

It didn't stop there. It offered multiple options: automatic versioning tied to file changes, global version numbers for major releases, even cache-control header strategies to complement the versioning. Each solution came annotated, structured, and production-ready.

All of it worked. Once implemented, the update lag disappeared. Users stopped seeing outdated layouts. No more "try hard-refreshing" instructions. The fix was quiet, elegant, and permanent.

What stood out wasn't the oversight—it was the speed and quality of the recovery. AI may miss a detail the first time, but once you point it out, it rarely needs to be told twice.


9. Recognizing reusable code? That's still a human job

Unless you call out the repetition, it’ll just keep copy-pasting.

Summary: AI is great at writing clean code—but recognizing repeated patterns and suggesting reusable components still requires a human to connect the dots.


Full Anecdote:

As more pages were added to the site—marketing services, courses, resources—the layout started to feel familiar. And for good reason: the code in each file was nearly identical. Backgrounds, font declarations, color variables, header logic—often copy-pasted verbatim across files.

What stood out wasn't inconsistency. It was repetition. Claude had helped build each page individually, and it treated each as a standalone assignment. It didn't notice that the header structure had already been written. Or that the background pattern code was showing up line-for-line in three separate places.

Even after I pointed out that the same code existed in earlier files, it often responded with a fresh copy anyway. The instinct wasn't to refactor. It was to rebuild.

Eventually, I had to say it plainly: "Let's extract this section and make it a reusable include." Once that instruction landed, the system fell into place. Shared logic moved into dedicated files. Layout became modular. Duplication disappeared.

But the insight stayed with me: AI sees the task in front of it. It doesn't automatically recognize the forest if you only ask it to plant a tree. Abstraction isn't its default. Reuse has to be declared.


10. Sometimes AI skips ahead and just gets it right

Minimal prompt, maximum payoff. Rare—but very satisfying.

Summary: Every now and then, AI doesn't just respond—it anticipates the full picture, offering a fix that's smart, complete, and feels like it should've taken three iterations instead of one.


Full Anecdote:

I knew what needed to be done—implement clean, human-friendly URL slugs using .htaccess. I understood the solution. But Claude just killed it.

All I said was: "Can we do something about this? I want custom URL slugs." That was enough.

Claude immediately returned a full plan. It rewrote the .htaccess file to remove .php extensions, created slugs like /analytics, /courses, and /about-us, and redirected legacy URLs cleanly. Then it updated the internal navigation to reflect the new structure, making sure no links broke—and even threw in a documentation-style guide to walk through the changes.

There was no back-and-forth. No partial fix. Just a clean, complete rollout.
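I haven't reproduced Claude's exact `.htaccess` here, but the general shape of that kind of rewrite setup looks something like this (a sketch, not the production file):

```apache
RewriteEngine On

# 301-redirect legacy URLs like /about-us.php to the clean /about-us slug
RewriteCond %{THE_REQUEST} \s/([^.\s?]+)\.php[\s?] [NC]
RewriteRule ^ /%1 [R=301,L]

# Internally serve /about-us from about-us.php when that file exists
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^([^/.]+)/?$ $1.php [L]
```

The first pair handles visitors (and search engines) arriving on old links; the second quietly maps the new slugs back to the underlying PHP files.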

Feeling confident, I casually mentioned: "Let's also set up a robots.txt file." Another mundane but necessary piece of infrastructure. Claude didn't blink.

It drafted a robots.txt that was... thoughtful. It explicitly allowed the root directory, blocked unnecessary paths, and matched the new slug structure from the URL rewrites without needing a reminder. It even anticipated the need for a clean sitemap structure and proposed a format that prioritized key pages, while leaving out utility paths.

It was one of those rare AI moments where everything aligned—problem, intent, and output. I knew what I wanted, and Claude delivered it in one tight, precise package. A site that looked sharper, read cleaner, and worked better, all because one command was understood not just literally, but systemically.


11. AI's real talent? Offering a zillion creative ways to solve the same problem

You’re the editor. AI is the brainstormer.

Summary: More than just generating fixes, AI's real strength sometimes lies in offering tailored solutions—each with context, trade-offs, and the clarity to let you decide what fits your needs.


Full Anecdote:

When I set out to make the footer reusable across my site, I expected Claude to give me a ready-made answer. Instead, it offered two thoughtful implementation routes—JavaScript injection and PHP includes—each explained with pros, cons, and real use cases. No overkill, no one-size-fits-all solution. Just options, designed to fit different needs.
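To make the trade-off concrete, here's a rough sketch of the JavaScript-injection route. The `/partials/` path and `data-include` attribute are illustrative assumptions; the PHP-include route is essentially a one-line `include 'partials/footer.php';` per page, resolved on the server instead of in the browser.

```javascript
// Map a partial name to its URL. The /partials/ convention is an
// assumption for this sketch, not the site's actual layout.
function partialUrl(name) {
  return '/partials/' + name + '.html';
}

// Fill every <div data-include="footer"></div> (or similar slot)
// with the fetched partial's markup.
async function injectPartials(doc) {
  for (const slot of doc.querySelectorAll('[data-include]')) {
    const res = await fetch(partialUrl(slot.getAttribute('data-include')));
    if (res.ok) slot.innerHTML = await res.text();
  }
}

// In the browser: run once the DOM is ready.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => injectPartials(document));
}
```

The JS route keeps pages static and cacheable but adds a round trip and a flash of missing footer; the PHP route ships complete HTML at the cost of server-side rendering. That's exactly the kind of pros-and-cons table Claude laid out.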

This pattern repeated itself again and again.

When I faced a Largest Contentful Paint (LCP) issue caused by animated bubble backgrounds, Claude didn't just point fingers at CSS complexity. It gave me five distinct fixes. From completely deferring the animation load using requestIdleCallback, to lazy-loading it with a 1x1 invisible image, to conditionally removing animations on mobile—every suggestion came with trade-offs and fallback logic. It even admitted when one option had a flaw (like saying "loading='lazy'" but not using it) and corrected itself.
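As an example of the first option, deferring non-critical work with requestIdleCallback can be sketched like this (a hedged illustration, not Claude's exact fix; the fallback delay is an arbitrary choice):

```javascript
// Default scheduler: requestIdleCallback where the browser supports it,
// a short setTimeout otherwise. typeof on an undeclared identifier is
// safe, so this also runs outside the browser.
const defaultIdle =
  typeof requestIdleCallback === 'function'
    ? requestIdleCallback
    : (cb) => setTimeout(cb, 200);

// Run a non-critical task (e.g. starting the bubble animation) only
// when the main thread is free, so it can't compete with the LCP render.
function deferUntilIdle(task, schedule = defaultIdle) {
  schedule(task);
}
```

Instead of kicking off the animation inside DOMContentLoaded, the page would call something like `deferUntilIdle(startBubbleAnimation)` and let the browser pick a quiet moment.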

Then there was the Adobe font issue. Claude proposed everything from inlining critical font styles, to JavaScript-based font loading, to header-based preloading using media tricks, to swapping in system fonts until Brandon Grotesque was ready. Each approach came with a performance rationale and layout implications, including variations for mobile performance and PageSpeed compliance.

What stood out wasn't just the range. It was the thoughtfulness. Claude doesn't push a fix—it sets a table. And that kind of design thinking, from an assistant that's supposed to "just generate code," made it feel more like a collaborator than a tool.


Conclusion: Would I Do It Again?

So… what started as a "let's see if this works" turned into 45 threads, 35 hours, and just over 1 MB of code. I didn't expect it to take that long, honestly—but then again, I wasn't just pasting prompts into a generator and watching magic happen. I wanted control. Simplicity. Something I could understand, break, and fix without crying.

Claude helped. Sometimes it nailed things in one go. Other times, it looped back like a polite intern who keeps rearranging the chairs instead of moving the table.

Would I recommend building a site with AI? If you've got patience, a bit of code sense, and zero appetite for bloated frameworks—yes. Just don't expect a 5-minute miracle. You'll still have to do the thinking.

This whole process also made me rethink how generative AI could be used to build actual products—not just websites. I know this is happening as we speak, but after spending those hours, it feels more real now.

Thanks for reading! May your code be clean, your interfaces intuitive, and your AI collaborations productive.