Building a Privacy-First Analytics Dashboard for My Blog

The Problem with Traditional Analytics

When I launched atruedev.com, I faced a dilemma that many developers encounter: How do I understand my audience without compromising their privacy?

Google Analytics? Too invasive. Plausible? Great, but another subscription. Umami? Self-hosting overhead.

I wanted something that:

  • Respects user privacy (no cookies, no fingerprinting)
  • Provides real insights (not just vanity metrics)
  • Integrates seamlessly with Next.js
  • Costs nothing to start
  • Scales with my blog

So I built my own. Here's how.

The Architecture

The analytics system consists of four main components:

  1. Client-side tracking hook - Lightweight, non-blocking
  2. Edge API endpoints - Fast data ingestion
  3. Storage abstraction - In-memory for dev, Vercel KV for production
  4. Admin dashboard - Real-time insights with auth

Key Design Decisions

No Cookies, Anonymous Visitors

// Generate anonymous visitor ID from IP + User-Agent
// (the date suffix rotates the hash daily, so visitors can't be tracked long-term)
import { createHash } from 'crypto'

export function generateVisitorId(ip, userAgent) {
  return createHash('sha256')
    .update(ip + userAgent + new Date().toDateString())
    .digest('hex')
}

Engagement Tracking

// Track real engagement, not just page loads
const scrollDepth = Math.round((scrollY / scrollHeight) * 100)
const metrics = {
  scrollDepth,
  readTime: timeOnPage,
  engaged: scrollDepth > 30 && timeOnPage > 30
}
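Factored into a pure function (the name and parameters here are illustrative, not from the actual codebase), the same logic becomes trivially testable:

```javascript
// Classify a visit as "engaged": scrolled past 30% and stayed longer than 30 seconds.
function computeEngagement(scrollY, scrollHeight, timeOnPage) {
  const scrollDepth = Math.round((scrollY / scrollHeight) * 100)
  return {
    scrollDepth,
    readTime: timeOnPage,
    engaged: scrollDepth > 30 && timeOnPage > 30
  }
}
```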

Storage Abstraction

// Seamlessly switch between dev and production
// (the API handler calls this via getStorage())
export function getStorage() {
  return process.env.KV_URL
    ? new KVStorage()
    : new InMemoryStorage()
}

Building the Analytics Hook

The heart of the system is a React hook that automatically tracks page views and engagement:

import { useCallback } from 'react'

export function useAnalytics() {
  const trackPageView = useCallback(async (path, title) => {
    try {
      await fetch('/api/analytics/track', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          type: 'pageview',
          data: { path, title, referrer: document.referrer }
        })
      })
    } catch (error) {
      // Fail silently - analytics should never break the app
      console.error('Analytics error:', error)
    }
  }, [])

  // Track engagement metrics
  const trackEngagement = useCallback(async (data) => {
    // Use sendBeacon for reliability
    const blob = new Blob([JSON.stringify({
      type: 'engagement',
      data
    })], { type: 'application/json' })
    
    navigator.sendBeacon('/api/analytics/track', blob)
  }, [])

  return { trackPageView, trackEngagement }
}

The API Layer

The tracking API is designed to be fast and fault-tolerant:

// pages/api/analytics/track.js
export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).end()
  }

  const { type, data } = req.body
  const storage = getStorage()
  
  // Generate anonymous visitor ID
  // x-forwarded-for can hold a comma-separated proxy chain; the client IP comes first
  const ip = (req.headers['x-forwarded-for'] || 'unknown').split(',')[0].trim()
  const userAgent = req.headers['user-agent'] || 'unknown'
  const visitorId = generateVisitorId(ip, userAgent)

  try {
    switch (type) {
      case 'pageview':
        await storage.trackPageView(
          data.path, 
          data.title, 
          visitorId, 
          data.referrer
        )
        break
      case 'engagement':
        await storage.trackEngagement(
          data.path,
          visitorId,
          data.scrollDepth,
          data.timeOnPage
        )
        break
    }
    
    res.status(200).json({ success: true })
  } catch (error) {
    // Log but don't fail
    console.error('Tracking error:', error)
    res.status(200).json({ success: true })
  }
}
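During development, the endpoint can be exercised by hand (localhost port assumed; payload shape matches the handler above):

```shell
# Send a fake page view to the tracking endpoint
curl -X POST http://localhost:3000/api/analytics/track \
  -H 'Content-Type: application/json' \
  -d '{"type":"pageview","data":{"path":"/hello","title":"Hello","referrer":""}}'
```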

Storage Strategy

For local development, I use in-memory storage. For production, Vercel KV:

import { kv } from '@vercel/kv'

class KVStorage {
  async trackPageView(path, title, visitorId, referrer) {
    const date = new Date().toISOString().split('T')[0]
    const pipeline = kv.pipeline()

    // Increment page view counter
    pipeline.hincrby(`pv:${date}:${path}`, 'views', 1)

    // Track unique visitors
    pipeline.sadd(`visitors:${date}:${path}`, visitorId)

    // Track referrers
    if (referrer) {
      pipeline.zincrby(`referrers:${date}`, 1, referrer)
    }

    // Expire all per-day keys after 30 days so data self-cleans
    const thirtyDays = 30 * 24 * 60 * 60
    pipeline.expire(`pv:${date}:${path}`, thirtyDays)
    pipeline.expire(`visitors:${date}:${path}`, thirtyDays)

    await pipeline.exec()
  }
}
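The in-memory counterpart isn't shown above; here's a minimal sketch that mirrors the same interface (the `getStats` method and its return shape are assumptions for illustration):

```javascript
// Minimal in-memory storage for local development.
// Same interface as KVStorage; data is lost on restart, which is fine for dev.
class InMemoryStorage {
  constructor() {
    this.views = new Map()     // "date:path" -> view count
    this.visitors = new Map()  // "date:path" -> Set of visitor IDs
  }

  async trackPageView(path, title, visitorId, referrer) {
    const date = new Date().toISOString().split('T')[0]
    const key = `${date}:${path}`
    this.views.set(key, (this.views.get(key) || 0) + 1)
    if (!this.visitors.has(key)) this.visitors.set(key, new Set())
    this.visitors.get(key).add(visitorId)
  }

  // Hypothetical read-side helper: today's numbers for one path
  async getStats(path) {
    const date = new Date().toISOString().split('T')[0]
    const key = `${date}:${path}`
    return {
      views: this.views.get(key) || 0,
      uniqueVisitors: (this.visitors.get(key) || new Set()).size
    }
  }
}
```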

The Dashboard

The admin dashboard provides real-time insights:

import { useState, useEffect } from 'react'

export default function AnalyticsDashboard() {
  const [stats, setStats] = useState(null)

  // Fetch immediately on mount, then auto-refresh every 30 seconds
  useEffect(() => {
    const fetchStats = async () => {
      // Stats endpoint path assumed; adapt to your API route
      const res = await fetch('/api/analytics/stats')
      setStats(await res.json())
    }
    fetchStats()
    const interval = setInterval(fetchStats, 30000)
    return () => clearInterval(interval)
  }, [])

  return (
    <div className="grid grid-cols-1 gap-8 lg:grid-cols-2">
      <StatsCard 
        title="Total Page Views"
        value={stats?.pageViews?.total || 0}
        subtitle="Last 7 days"
      />
      
      <PageViewChart data={stats?.pageViews?.byDate || {}} />
      
      <TopPages pages={stats?.pageViews?.topPages || []} />
      
      <EngagementMetrics 
        engagement={stats?.engagement || {}} 
      />
    </div>
  )
}

Authentication Without the Hassle

Instead of basic auth (which shows an ugly browser popup), I implemented JWT-based authentication:

// Simple JWT implementation (HS256, Node.js)
import { createHmac } from 'crypto'

const base64url = (value) =>
  Buffer.from(JSON.stringify(value)).toString('base64url')

export function generateToken(payload) {
  const header = base64url({ alg: 'HS256', typ: 'JWT' })
  const payloadWithExp = {
    ...payload,
    // JWT `exp` is seconds since epoch, not milliseconds
    exp: Math.floor(Date.now() / 1000) + 24 * 60 * 60 // 24 hours
  }
  const body = base64url(payloadWithExp)
  const signature = createHmac('sha256', process.env.JWT_SECRET)
    .update(`${header}.${body}`)
    .digest('base64url')

  return `${header}.${body}.${signature}`
}
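For completeness, a sketch of the matching verifier. It takes the secret as a parameter for testability (an assumption; the real code may read it from the environment) and checks only the signature, leaving `exp` enforcement to the caller:

```javascript
import { createHmac, timingSafeEqual } from 'crypto'

// Verify a token of the form header.body.signature produced as above.
// Returns the decoded payload on success, or null on a bad signature.
export function verifyToken(token, secret) {
  const parts = token.split('.')
  if (parts.length !== 3) return null
  const [header, body, signature] = parts

  const expected = createHmac('sha256', secret)
    .update(`${header}.${body}`)
    .digest('base64url')

  // Constant-time comparison avoids leaking signature bytes via timing
  const a = Buffer.from(signature)
  const b = Buffer.from(expected)
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null

  // Node's base64 decoder also accepts base64url-encoded input
  return JSON.parse(Buffer.from(body, 'base64').toString('utf8'))
}
```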

Deployment with Vercel KV

Setting up Vercel KV is straightforward:

  1. Create KV database in Vercel Dashboard
  2. Connect to your project
  3. Pull environment variables locally
  4. Deploy!
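Steps 2 and 3 map to two Vercel CLI commands (the target filename is a choice, not a requirement):

```shell
# Link the local checkout to the Vercel project
vercel link

# Pull the KV connection variables (KV_URL, KV_REST_API_TOKEN, ...) into a local env file
vercel env pull .env.development.local
```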

The free tier includes 3,000 requests/day - plenty for most blogs.

Performance Impact

The entire analytics system adds:

  • No third-party scripts to load
  • < 1KB added to your bundle for the tracking hook
  • ~5ms per page view (async, non-blocking)
  • No impact on Core Web Vitals

What's Next?

Now that I have the data, I'm building AI-powered insights:

// Coming soon: AI content insights
const insights = await analyzeContentPerformance(
  articleSlug,
  articleContent,
  analyticsData
)

// Predict performance before publishing
const prediction = await predictContentPerformance(
  title,
  content,
  historicalData
)

Key Takeaways

  1. Privacy first - You don't need invasive tracking for useful analytics
  2. Start simple - Basic metrics are often the most valuable
  3. Own your data - Building it yourself gives you full control
  4. Progressive enhancement - Start with in-memory, scale to KV
  5. Fail gracefully - Analytics should never break your site

Try It Yourself

The entire analytics system is open source as part of the atruedev.com repository. Feel free to adapt it for your own projects!

Key files to check out:

  • /lib/analytics/useAnalytics.js - Client-side tracking
  • /pages/api/analytics/track.js - Data ingestion
  • /lib/analytics/storage.js - Storage abstraction
  • /pages/admin/analytics.js - Dashboard UI

Conclusion

Building your own analytics doesn't have to be complex. With 200 lines of code, you can have a privacy-respecting, insightful analytics system that grows with your blog.

No cookies. No tracking scripts. No privacy concerns. Just clean, actionable data.

What metrics matter most for your blog? Let me know in the comments!