With the recent launch of some interesting-looking 'AI-powered' design tools, I saw an opportunity to rethink the entire UX design process. And with the challenge of building a new AI-powered ESG (Environmental, Social, and Governance) product in just three months 🤯, I had the perfect opportunity to pilot and test a new AI-driven design process using Vercel's v0.
Our new AI product helps businesses comply with ESG regulations more efficiently. It gives compliance experts easy access to relevant information and provides insights to guide their actions.
Vercel's v0 is an AI-powered tool that generates code from text prompts, much like using ChatGPT or Gemini. It turns those prompts into interactive designs and can even recreate a design from an uploaded image.
The goal is to discover whether designing with AI speeds up the process, helps me be more creative, or simply hinders progress and leads to unnecessary complexity. I'm aware that what I write here will date rapidly, so let's hang on to our seats and get started!
The new ESG product needed to be intuitive and engaging for users while effectively communicating complex data and insights. So when it came to actually starting on concepts and user flows, I faced two key challenges:
The first was letting go of familiar tools. It was a scary thought, as I have used Figma and similar 'traditional' design tools for at least 15 years. Yet I wanted to explore designing without relying on them, pushing the boundaries and moving away from pixel-pushing and connecting clickable prototypes. Is prompting a craft in the way that managing elements on a screen is? I'm not sure yet, but it's a means to an end.
The second was handover. With v0 generating near-production-ready code, I needed to create a streamlined handover process for engineering that maximised the efficiency of our collaboration.
To fully embrace this new way of designing, I immersed myself in v0, exploring its capabilities and pushing its limits. I had looked at other AI tools such as Lovable and Bolt, but as luck would have it, our engineering team were already using Vercel, and v0 was a key part of that ecosystem. Fortunately, v0's use of familiar technologies, Tailwind (a highly flexible CSS framework) and shadcn/ui (a React-based UI component library), eased the transition.
👉 Be advised: v0 and many similar tools use shadcn/ui and Tailwind for a consistent UI, which makes selecting patterns easier, but customisation can be challenging if their default styles don't align with your vision.
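As a rough illustration (not from our actual setup): shadcn/ui themes are typically driven by CSS variables, so one way to pull the defaults towards your own brand is to point Tailwind's colour tokens at your own variables in the config, along these lines:

```ts
// tailwind.config.ts: a minimal sketch, assuming the standard shadcn/ui
// setup where colours live as CSS variables (e.g. --primary) in globals.css.
import type { Config } from "tailwindcss"

const config: Config = {
  content: ["./app/**/*.{ts,tsx}", "./components/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // Redefine --primary in your global CSS, and every generated
        // component that uses "bg-primary" picks up your brand colour.
        primary: {
          DEFAULT: "hsl(var(--primary))",
          foreground: "hsl(var(--primary-foreground))",
        },
      },
    },
  },
}

export default config
```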
First version of prompt for a simple search
Before diving into v0, the UX team had spent the previous months conducting extensive customer discovery sessions with P&L owners and compliance executives. Basically, our vision was to design from the top down. These sessions yielded valuable insights into user needs and pain points.
These insights were crucial in shaping the design of the new product. With v0, I uploaded as much of this feedback as possible into the project, incorporating it into prototypes and generating new versions to ensure a user-centred design throughout the process.
To further refine our understanding, we employed the Jobs to Be Done (JTBD) framework. This involved identifying the "jobs" users were "hiring" our product to do. Some key jobs included staying on top of evolving ESG regulations with accurate, up-to-date information, turning complex regulatory data into insights that guide compliance actions, and fitting compliance work into existing systems and workflows.
This JTBD analysis, combined with the user feedback, helped us prioritise features and ensure the design directly addressed user needs. The beauty of AI tools like v0 is that they allow for rapid iteration, baking these needs into the design as we go.
Since we were building with AI, for an AI system, it was reasonable to expect a set of principles to guide both us and v0. I established the following guiding principles to ensure responsible design throughout our process, and I fed them into v0.
Use a human-centred approach: Determine if AI adds value. AI should improve the user experience or solve real problems in the compliance domain, such as automating repetitive tasks, identifying potential risks, or providing insights from complex regulations.
Use multiple outputs: Recognise the inherent variability of generative AI. When a user inputs product specifications, the AI might generate different compliance strategies over time as regulations evolve or new information becomes available.
Teach effective use: Explain the benefits, not the technology. When explaining our AI, focus on conveying how it makes part of the experience better or delivers new value, rather than on how the underlying technology works.
Support co-editing of outputs: Give control back to the user when automation fails. This enables users to take over and correct or refine the AI's output (see the sketch after this list).
Calibrate trust with explanations: Explainability and trust are essential in the compliance domain. Users need to understand how the AI arrives at its conclusions and recommendations.
Offer ways to improve outputs: Fail gracefully. Acknowledging and handling errors gracefully is crucial for designing robust AI systems.
Recognise different ways of interacting: Provide alternate inputs. Beyond text-based interaction, consider how voice commands, image recognition, or even augmented reality (AR) could enhance the user experience in a compliance context.
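To make the co-editing principle concrete, here's a minimal, hypothetical sketch (the component and the generateSummary() stand-in are my own illustration, not code from the product): the AI drafts a summary, and the user can freely edit or regenerate it.

```tsx
// EditableAiSummary.tsx: a minimal sketch of the co-editing principle.
// generateSummary() is a hypothetical stand-in for the real AI call.
"use client"

import { useState } from "react"

async function generateSummary(): Promise<string> {
  // Placeholder draft; a real implementation would call the AI backend.
  return "Draft: this product line may fall in scope of upcoming ESG disclosure rules."
}

export default function EditableAiSummary() {
  const [summary, setSummary] = useState("")

  return (
    <div className="space-y-2">
      {/* The AI drafts, but the user keeps control of the final text. */}
      <textarea
        className="w-full rounded border p-2"
        value={summary}
        onChange={(e) => setSummary(e.target.value)}
        placeholder="AI-generated summary will appear here"
      />
      <button
        className="rounded bg-primary px-3 py-1 text-primary-foreground"
        onClick={async () => setSummary(await generateSummary())}
      >
        Regenerate draft
      </button>
    </div>
  )
}
```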
My design process with v0 evolved into distinct stages:
Discovery: In-depth interviews with high-level executives, including directors of finance and P&L owners. These sessions provided valuable insights into the difficulties users encountered when managing ESG compliance. One customer, for instance, highlighted the need for a tool that could integrate with existing systems and provide accurate, up-to-date information on ESG regulations.
Prompt crafting: Writing precise and effective text prompts to describe UI elements, interactions, and user flows. This required a shift from visual manipulation to a deeper understanding of v0's capabilities.
It reminded me of how I used to 'design in browser' many years ago, and it brought back dormant skills from my time as a coder working on websites in the 00s. If a prompt wasn't working, I could edit the code directly, and since v0 produces well-structured code, this made it much easier.
Rapid iteration: Generating multiple UI options based on prompts, enabling rapid evaluation and iteration, and producing concepts to put in front of customers for validation.
Feedback loops: Continuously incorporating user feedback into prototypes and generating new versions, ensuring a user-centred design.
Engineering handover: Handing over interactive prototypes and clean, well-documented code generated by v0, streamlining the development process. An added bonus is that v0's generated code can also serve as a foundation for collaboration and refinement with engineers.
Alongside these stages, a few working practices kept the project organised:
Uploading user stories, feedback, styles, and relevant data directly into v0 at the very beginning.
Using each new chat in a project as a separate design file to organise work. For example, instead of creating a Figma file called 'User Profile', I would start a new chat in v0 and give it a feature-level name such as 'User management'.
Creating standardised templates for describing components, ensuring consistency and clarity. To ensure a smooth and efficient handover to engineering, I focused on creating standalone, self-contained components. This approach minimised dependencies and made it easier for developers to integrate the generated code into the main application.
Component name: [Component name]
Type: [e.g., filter, list, button, form, navigation]
Location: [Where it sits on the page, relationship to other elements]
Functionality: [What the component does, user interactions]
Appearance: [Styling details, e.g., 'drop-down', 'multi-select', colors, size]
Content: [Data it displays, placeholder content]
Dependencies: These should reside in the component folder.
Location: Provide context, e.g. 'This component will be placed in the header of the website, next to the search bar.'
Components should be standalone: All dependencies, such as styling and data, should be included on the page.
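As a hypothetical filled-in example (the component and details are illustrative, not from the actual product):

Component name: RegulationFilter
Type: filter
Location: Page header, to the right of the search bar
Functionality: Lets users filter the regulation list by framework; selecting an option updates the list below
Appearance: Single-select drop-down, default shadcn/ui styling
Content: Placeholder options such as 'CSRD', 'SFDR', 'EU Taxonomy'
Dependencies: Mock data and styling included in the component file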
Instead of building the entire UI in a single v0 file, I created separate files for each component. For example, a filter component, a button component, or a data visualization component would each have its own dedicated file. This isolation ensured that each component's code was independent and manageable.
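To make that concrete, here's a rough sketch of what one such standalone file can look like, written in the shadcn/ui + Tailwind style v0 generates. The component name, props, and mock data are my own illustration, not output from the actual project:

```tsx
// RegulationFilter.tsx: hypothetical standalone component sketch.
// Mock data and styling live in this file, so it has no external
// dependencies beyond the shared shadcn/ui primitives.
"use client"

import { useState } from "react"
import {
  Select,
  SelectContent,
  SelectItem,
  SelectTrigger,
  SelectValue,
} from "@/components/ui/select"

// Placeholder content; the real list would come from the data layer.
const REGULATIONS = ["CSRD", "SFDR", "EU Taxonomy"]

export default function RegulationFilter({
  onChange,
}: {
  onChange?: (regulation: string) => void
}) {
  const [selected, setSelected] = useState<string>()

  return (
    <div className="w-64">
      <Select
        value={selected}
        onValueChange={(value) => {
          setSelected(value)
          onChange?.(value)
        }}
      >
        <SelectTrigger aria-label="Filter by regulation">
          <SelectValue placeholder="Filter by regulation" />
        </SelectTrigger>
        <SelectContent>
          {REGULATIONS.map((regulation) => (
            <SelectItem key={regulation} value={regulation}>
              {regulation}
            </SelectItem>
          ))}
        </SelectContent>
      </Select>
    </div>
  )
}
```

Because the placeholder data and styling travel with the file, an engineer can drop it into the codebase and simply swap the mock list for the real data layer.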
To further streamline the workflow, I linked each component file in v0 to its corresponding Jira story and ticket. This provided engineers with direct access to the component's design, code, and associated documentation within their familiar project management environment.
Of course, there are other AI tools out there, so most of the following tips will apply to any chat-based tool.
Begin with broader prompts to establish the overall structure and layout of your design, then use more specific prompts to create individual components, referencing the existing structure. For example, you might first ask for 'a compliance dashboard with a header, sidebar, and main content area', then follow up with 'add a regulation filter to the sidebar, matching the existing layout'. This iterative approach helps you gradually build up your design while maintaining clarity and control.
v0 retains the context of your previous prompts within a chat. Use this to your advantage by referencing previous responses and building upon them. You can also link to previous files to provide examples or context for v0.
I have used Figma visuals and even screenshots found online as starting points. Uploading these gives v0 a head start, so it can quickly understand the look and feel you're going for. This lets you jump right into fine-tuning details and interactions instead of starting from scratch.
When writing prompts, use clear and descriptive names for variables, classes, and IDs. This will make the generated code more readable and easier for you and your engineering team to understand.
Ask v0 to add comments to the generated code to explain the purpose of different sections. This improves code quality and makes the handover to engineering clearer.
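As a small, hypothetical illustration of both tips (the names and data are mine, not v0 output):

```ts
// Descriptive names and short comments make generated code read like
// documentation for the engineering team.
type ComplianceTask = { title: string; dueDate: Date }

const complianceTasks: ComplianceTask[] = [
  { title: "Submit CSRD report", dueDate: new Date("2025-01-31") },
  { title: "Update supplier ESG survey", dueDate: new Date("2025-06-30") },
]

// "overdueComplianceTasks" explains itself in a handover;
// a generic name like "filtered" would not.
const today = new Date()
const overdueComplianceTasks = complianceTasks.filter(
  (task) => task.dueDate < today
)

console.log(`${overdueComplianceTasks.length} task(s) overdue`)
```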
Designing with v0 was a lot of fun, which is a big point in its favour. It was a refreshing change of pace, but will it fully replace Figma, Sketch, Penpot, and the like? While it has a lot to offer, it's not a complete replacement yet and still has room for improvement. I will definitely keep using it for prototyping and the early stages of the design process.
I can see the bigger picture here: these tools will eventually incorporate every stage of the design process, feeding everything we learn directly into a fully fledged final design from start to finish. A one-stop shop where we plug in what we know and the individual agents do their thing.
So what happened at the end of it all? I'm very happy with the results of the pilot. I had to ramp up fast, and it took some time to get things mostly right. It's still not perfect, but I am refining my prompts and processes all the time. The collaboration with engineering was a real success, though the annotation of files could be improved; that's something I'm working on.
The product is currently in Beta, undergoing testing and further validation with a select group of customers, so unfortunately I'm unable to show visuals of it. This Beta phase is crucial to ensure a successful launch and to demonstrate how AI can impact UX design for the better. Designing the new product with v0 was exciting; it challenged me to rethink traditional processes and embrace AI as a co-pilot. The experience was incredibly rewarding, there is no looking back, and I will continue to refine this process.