CASE STUDY

Lumo

An AI Mouse for Contextual Productivity

Designing a contextual AI interaction system that lives at the input level, helping people work faster across applications without interrupting flow.

My Core Ownership

  • Defined the mouse AI interaction model

  • Developed interaction patterns for voice + text input within the menu system

  • Designed context engineering framework for intelligent action prioritisation

  • Built comprehensive AI action reference sheet and prompt taxonomy

Duration

3 months

Team

Lidi Fu (UX Designer)
Callum Buchanan (UI Designer)
Shaheer Ahmed (Developer)
Dominik Donocik (Lead Experience Designer)

OVERVIEW

Quick Context

LP Intelligence is a system-level effort to define how AI should behave across LP products.

My focus within this initiative was the Lumo AI mouse: exploring how intelligence can live at the input layer, rather than inside individual applications.

Instead of building another AI app or assistant, the goal was to design a mouse experience that:

  • understands what the user is doing

  • surfaces the right tools at the right moment

  • works across applications

  • and disappears when not needed

THE PROBLEM

Productivity friction lives between applications

Modern work constantly moves across tools — copying, transforming, summarising, and reusing content.

These moments introduce friction:

  • frequent app switching

  • repetitive copy–paste actions

  • hunting through menus for the right tool

  • AI features buried inside apps or chat panes

Most AI tools today live inside applications, forcing users to break flow and adapt their behaviour.

What if AI lived at the point of intent — where selection and action already happen?

DESIGN INTENT

Streamline everyday workflows without creating new ones

The mouse becomes a context-aware gateway, not a destination.

The mouse experience was guided by a few clear ideas:

Work where users already work

No new destinations, no mode switching.

Exist between apps, not inside them

Support cross-app workflows rather than replicating app features.

Be contextual, not comprehensive

Fewer options, chosen well.

Stay transient

Appear when useful, disappear immediately after.

DESIGN DECISIONS & CORE WORK

Designing for Intent

The AI mouse uses a single haptic side button with pressure-based input:

Gentle press

Object-level actions (text, image)

Press+drag

Capture-based actions (region, window)

This keeps the gesture set predictable, fast, and non-overwhelming, while still scaling to multiple intents.
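To make the gesture model concrete, here is a minimal sketch of how pressure and drag might be classified into the two intents. The event shape, threshold values, and names are illustrative assumptions, not the shipped firmware:

```ts
// Hypothetical raw event from the haptic side button.
interface SideButtonEvent {
  pressure: number;      // normalised 0..1 (assumed scale)
  dragDistance: number;  // pixels moved while the button is held
}

type MouseIntent =
  | { kind: "objectAction" }  // gentle press: act on the current selection
  | { kind: "capture" };      // press + drag: marquee or window capture

// Illustrative thresholds; real values would come from haptic tuning.
const PRESS_THRESHOLD = 0.2;
const DRAG_THRESHOLD_PX = 8;

function classifyIntent(e: SideButtonEvent): MouseIntent | null {
  if (e.pressure < PRESS_THRESHOLD) return null; // ignore accidental grazes
  return e.dragDistance >= DRAG_THRESHOLD_PX
    ? { kind: "capture" }
    : { kind: "objectAction" };
}
```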

Trade-offs & Constraints

Trade-offs & Constraints

Intelligence at the input level

A context-aware micro-interface emerges from the mouse itself, providing intelligent shortcuts and AI-powered actions based on:

What you've selected (text, image, multimodal content)

Where you are (active application and workflow)

What you're doing (current task and historical patterns)
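As a rough illustration, these three signals could be modelled as a single context snapshot that every menu decision reads from. The shapes and names below are my own shorthand, not the product's schema:

```ts
// Illustrative shapes for the three context signals.
type Selection =
  | { type: "text"; content: string }
  | { type: "image"; dataUrl: string }
  | { type: "region"; screenshot: Blob }  // marquee capture
  | { type: "none" };

interface ContextSnapshot {
  selection: Selection;     // what you've selected
  activeApp: string;        // where you are (active application)
  recentActions: string[];  // what you're doing (task and historical patterns)
}
```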

Picking up text or images

Users can select content across any app and trigger relevant AI actions directly from the cursor.

Marquee selection

A lightweight capture tool enables multimodal input (image + context), unlocking richer AI suggestions.

Contextual shortcuts

When nothing is selected, the menu adapts to the active window, offering useful cross-app shortcuts.
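Taken together, these three entry points imply a simple dispatch rule: the menu mode follows the selection state. A hedged sketch of that rule, with names invented for illustration:

```ts
type SelectionType = "text" | "image" | "region" | "none";
type MenuMode = "textActions" | "imageActions" | "captureActions" | "appShortcuts";

// The menu mode follows directly from what (if anything) is selected.
function menuModeFor(selection: SelectionType): MenuMode {
  switch (selection) {
    case "text":   return "textActions";
    case "image":  return "imageActions";
    case "region": return "captureActions";
    case "none":   return "appShortcuts"; // adapt to the active window instead
  }
}
```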

Menu Exploration

Two menu directions for the mouse were explored

Route 1: Tabbed Menu

A structured approach separating input, AI actions, and app shortcuts.

This offered clarity, but added friction for quick interactions by requiring explicit navigation.

Route 2: Dynamic Menu (Preferred)

A single adaptive menu that prioritises the most relevant actions based on context and reveals more options progressively.

This route aligned better with the goal of keeping the AI lightweight, fast, and responsive to intent.

Menu Structure

By removing explicit navigation and relying on context instead, the menu is faster, more responsive, and scalable across multimodal content.
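One plausible way to implement that prioritisation is to score each action against the current context and reveal only the top few by default. The weights below are invented for illustration, not taken from the product:

```ts
type SelectionType = "text" | "image" | "region" | "none";

interface AIAction {
  id: string;
  label: string;              // e.g. "Summarise"
  supports: SelectionType[];  // object types the action applies to
}

interface MenuContext {
  selection: SelectionType;   // what is selected right now
  recentActionIds: string[];  // repeated user behaviour
}

// Rank actions by contextual fit; the weights are illustrative.
function rankActions(actions: AIAction[], ctx: MenuContext): AIAction[] {
  const score = (a: AIAction) =>
    (a.supports.includes(ctx.selection) ? 2 : 0) +  // matches the selection
    (ctx.recentActionIds.includes(a.id) ? 1 : 0);   // matches recent habits
  return [...actions].sort((a, b) => score(b) - score(a));
}

// Progressive reveal: a focused primary set, everything else behind "More".
function splitMenu(ranked: AIAction[], visibleCount = 4) {
  return { primary: ranked.slice(0, visibleCount), more: ranked.slice(visibleCount) };
}
```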

Menu Transformation

Input is designed to be flexible and optional

Voice and text feed into the same system logic, allowing users to move fluidly between input modes without changing tools or context.

Hover over input field

The user can also choose to type or speak

Long press to speak

AI starts to listen and transcribe

Analysing

AI reads the prompt and performs the action
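A minimal sketch of what "voice and text feed into the same system logic" could mean in practice: both modes resolve to one prompt string before anything downstream runs. transcribe is a hypothetical stand-in for whatever speech-to-text engine the system actually uses:

```ts
// Hypothetical stand-in for the system's speech-to-text engine.
async function transcribe(audio: Blob): Promise<string> {
  return "…transcript…"; // stubbed for illustration
}

async function promptFromTyping(text: string): Promise<string> {
  return text.trim();
}

async function promptFromSpeech(audio: Blob): Promise<string> {
  return (await transcribe(audio)).trim();
}

// A single entry point: downstream logic never knows whether the
// user typed or spoke.
async function runPrompt(prompt: string): Promise<void> {
  // ...evaluate context, surface or execute the matching action
}
```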

Context Engineering

Rather than open-ended prompts or chat, the system evaluates context first:

  • selected text

  • selected image

  • captured region or window

  • repeated user behaviour

Only then does it surface a small, focused set of relevant actions.

This keeps the experience:

  • predictable

  • fast

  • non-overwhelming

Cursor-First, Transient UI

Because the mouse is not an app, the UI needed to be:

  • embedded at the cursor

  • visually distinct from the app underneath

  • transient by default

The AI menu:

  • appears where the user is working

  • sits on top of existing UI

  • disappears immediately after use

No persistent panels. No new workspace.

Mapping the Mouse Experience

Object Selection → Contextual Actions

When text or an image is selected:

  • user taps the AI button

  • a focused Gen-UI menu appears

  • actions adapt to the object type (rewrite, summarise, transform, etc.)

Marquee Capture → AI Augmentation

The capture flow:

  • a region is captured

  • AI analyses the content

  • relevant actions are suggested (e.g. improve, summarise, upscale)

The result is copied back to the clipboard for immediate reuse.
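A sketch of that capture round-trip. navigator.clipboard.writeText is a standard web API; analyseRegion and applyAction are hypothetical stand-ins for the AI steps:

```ts
// Hypothetical stand-ins for the AI analysis and execution steps.
async function analyseRegion(region: Blob): Promise<string[]> {
  return ["Improve", "Summarise", "Upscale"]; // stubbed suggestions
}

async function applyAction(action: string, region: Blob): Promise<string> {
  return `result of "${action}"`; // stubbed result
}

// Capture round-trip: capture → analyse → act → clipboard.
async function runCaptureFlow(region: Blob): Promise<void> {
  const suggestions = await analyseRegion(region);
  const chosen = suggestions[0];                    // in practice, the user picks
  const result = await applyAction(chosen, region);
  await navigator.clipboard.writeText(result);      // ready for immediate reuse
}
```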

IMPACT

What happened & what I learned

What shipped

  • xx

What Worked

  • xx