What to expect when you’re expecting… your designs to go live!
Muneesh Kapoor
I remember my first project at ZEUX. It was a UX design project for the securities arm of one of the leading banks of India. As a frontend developer, my role was to convert the UI screen designs into HTMLs, which would then be used by the client's development team. I worked closely with the design team to deliver around 60 fully responsive HTMLs in 3 months. I was proud of the end-product as it involved showing a lot of complex data in a simplified manner. We received rave reviews from the client as well because we had delivered everything they might need: the style guide, Sketch files, assets and, of course, the HTMLs.
A couple of years later, out of the blue, I learned that the revamped website was live. I was excited because this was my first project at ZEUX to go live. However, when I landed on the website, I was a little disappointed. Even though the developed website looked great, I could notice a lot of discrepancies between my HTMLs and the final product. The same was true for the design team.
Could these discrepancies have been avoided? The short answer is yes… if only the design team had been given an opportunity to review the final product before it went live.
Why frontend review by designers?
In the two most commonly followed software development models, i.e. Waterfall and Agile, the process works like a conveyor belt where the designs are passed from the design team to the development team. Post the handover, the design team is generally out of the picture and the development team relies on the mockups to implement the design. This can force the development team to make a lot of design decisions throughout the process, which might be detrimental to the end-product.
In order to avoid such a situation, we need to make this a collaborative process where the designer can review the end-product before the testers get their hands on it. The objective is simple: the design team, who have worked on these screens for days (if not months), will be able to find the discrepancies that are easily missed by the development and testing teams.
Validation vs. Verification
We all have heard these two terms and used them in our general vocabulary. But are we using them correctly? And what is the difference between the two?
Verification is the process of checking that a piece of software achieves its goal without any bugs. It ensures that the product is being built correctly, i.e. that the developed product fulfils the requirements we have.
Validation is the process of checking whether the software product is up to the mark. It checks whether what we are developing is the right product, validating the actual product against the expected one.
In layman's terms:
Verification: Are we building the product right?
Validation: Are we building the right product?
Still confused? Let’s see the difference between the two.
| Verification | Validation |
| --- | --- |
| It includes checking documents, design, code and programs. | It includes testing and validating the actual product. |
| Verification is static testing. | Validation is dynamic testing. |
| It does not include the execution of the code. | It includes the execution of the code. |
| Methods used in verification are reviews, walkthroughs, inspections and desk-checking. | Methods used in validation are black-box testing, white-box testing and non-functional testing. |
| It checks whether the software conforms to the specifications. | It checks whether the software meets the requirements and expectations of the customer. |
| Verification is done by the quality assurance team. | Validation is executed on the software code with the help of the testing team. |
| It comes before validation. | It comes after verification. |
| It consists of checking documents/files and is performed by humans. | It consists of executing the program and is performed by a computer. |
In most development projects, we have a team of testers who validate the result rather than verify the screens. Hence, it's the responsibility of the designer to verify that the end-result conforms to the designs shared and isn't breaking anywhere. As it's their design, they can notice minor errors in font size, weight, colour, etc. If something is off, a designer will be able to catch it quickly since they know what to look for in the first place. This also frees up the engineers to focus their code-review feedback on the way things are built, instead of the way things look.
Testing Process
First and foremost, we need to understand that a design review is not a code review. The designer is not expected to read through thousands of lines of code and share feedback. Rather, it is their responsibility to catch issues pertaining to colour use, font sizing and spacing that stray from the mock design.
Secondly, designers should not chase pixel-perfection. Personally, I don't think pixel perfection is actually possible: each browser renders the same piece of code slightly differently. So, the goal needs to be pixel-pretty-close. The developer can't guarantee pixel-for-pixel consistency across all devices, but they can ensure that the experience is consistent and in line with the design. That's the most important thing.
https://www.joshwcomeau.com/css/pixel-perfection/
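One way to make "pixel-pretty-close" concrete is automated visual regression testing with an explicit tolerance. The sketch below assumes the team uses Playwright; the URL, the baseline name and the 1% tolerance are hypothetical values a team would tune per project.

```ts
// visual.spec.ts: a minimal sketch, assuming Playwright is in use.
import { test, expect } from '@playwright/test';

test('dashboard stays pixel-pretty-close to the baseline', async ({ page }) => {
  await page.goto('https://example.com/dashboard'); // hypothetical URL

  // Compare the rendered page against a stored baseline image, allowing up
  // to 1% of pixels to differ; browsers never render identically, so a
  // small tolerance separates real regressions from rendering noise.
  await expect(page).toHaveScreenshot('dashboard.png', {
    maxDiffPixelRatio: 0.01,
  });
});
```

The first run records the baseline; later runs fail only when the rendered page drifts beyond the tolerance, which is exactly the consistency bar described above.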
As mentioned above, the design team will be verifying the design which is static testing. Static testing generally involves:
1. Inspections
2. Reviews
3. Walkthroughs
4. Desk-checking
In today's world, digital content is consumed through every channel possible: desktop, tablet, mobile, smartwatch or even personal assistants. The software being developed could be for any of these channels. Hence, it's important to test the software on the channel where it will be consumed.
Cross-device Testing
Cross-device testing is a software testing technique that checks solutions on various devices. This provides confidence in their quality and accessibility no matter how a user chooses to interact.
In an ideal world, we would test on machines with the same configuration as the end-user's. However, due to limitations of time and resources, the testing needs to be more strategic. We need to be aware of the channels where the content is going to be consumed. There are many tools that can be used to gauge which configurations are most popular. For example, Statcounter gives up-to-date market-share statistics for operating systems, browser usage and even the screen sizes being used to view pages.
Test on actual devices whenever possible. Emulators and simulators are a great place to start, but some issues simply never surface on them, which is one of the biggest reasons people use them in conjunction with real devices.
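If you do start with emulation, modern test runners make it nearly free. Here is a minimal sketch using Playwright's built-in device descriptors; the URL and the collapsed-menu behaviour are hypothetical examples of what a designer might want asserted.

```ts
// mobile.spec.ts: a minimal sketch of device emulation with Playwright.
import { test, expect, devices } from '@playwright/test';

// Emulate an iPhone's viewport, user agent, pixel density and touch input.
test.use({ ...devices['iPhone 13'] });

test('navigation collapses into a menu button on small screens', async ({ page }) => {
  await page.goto('https://example.com'); // hypothetical URL

  // On a phone-sized viewport, the full navigation should give way to a toggle.
  await expect(page.getByRole('button', { name: 'Menu' })).toBeVisible();
});
```

The same spec can then be re-run on real devices to catch the issues emulation misses.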
Cross-browser Testing
Cross-browser testing is an important and often ignored aspect of the testing phase. Browser vendors follow the Open Web Standards, but each has its own interpretation of them. There is a huge amount of competition in the browser market, so each browser tries to differentiate itself. As a result, extra features are added or refined on top of the HTML standard, meaning each browser potentially deals with the same HTML in a different way. Since they each render HTML, CSS and JavaScript in unique ways, it is important to check the interface on different browsers, or different versions of a single browser.
As per Statcounter, the most popular browsers of 2022 worldwide are:
1. Chrome
2. Safari
3. Edge
4. Firefox
5. Samsung Internet
6. Opera
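Conveniently, these six browsers run on only three rendering engines: Chrome, Edge, Samsung Internet and Opera are Chromium-based, Safari runs on WebKit, and Firefox runs on Gecko. A test runner that ships all three engines therefore covers most rendering differences. Here is a minimal Playwright config sketch; the project names are arbitrary.

```ts
// playwright.config.ts: a minimal sketch covering the three major engines.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    // Chromium also approximates Chrome, Edge, Opera and Samsung Internet.
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    // WebKit is the engine behind Safari.
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    // Gecko is the engine behind Firefox.
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  ],
});
```

Running `npx playwright test` then executes the same suite once per engine, surfacing engine-specific rendering quirks early.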
Testing Checklist
I have already covered design review best practices in an earlier article. It's time to elaborate on the testing checklist and on documenting the errors found during the process.
Ideally, each designer should create their own checklist and keep iterating on it from time to time, based on their experience of previous projects. The checklist can be divided into the four pillars of design: Navigation, Presentation, Content and Interaction.
Here are some of the component properties which should be part of the checklist (a sketch of how some of these checks can be automated follows the list):
• Grid system and alignment
• Colors
• Fonts and texts
• Links and navigation
• Images / Icons
• Forms and buttons
• Responsive Web Design
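Several of these checks, particularly colours, fonts and spacing, can be codified so that the designer's review becomes a permanent, repeatable assertion. A minimal sketch, again assuming Playwright; the tokens, URL and button label are hypothetical stand-ins for a project's real style guide.

```ts
// styleguide.spec.ts: a minimal sketch of checklist items as assertions.
import { test, expect } from '@playwright/test';

// Hypothetical values copied from the style guide. toHaveCSS compares
// against computed styles, so colours must be written as rgb() values.
const styleGuide = {
  primaryColour: 'rgb(13, 110, 253)',
  bodyFontSize: '16px',
};

test('primary button conforms to the style guide', async ({ page }) => {
  await page.goto('https://example.com/components'); // hypothetical URL

  const button = page.getByRole('button', { name: 'Submit' });
  await expect(button).toHaveCSS('background-color', styleGuide.primaryColour);
  await expect(button).toHaveCSS('font-size', styleGuide.bodyFontSize);
});
```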
During the review process, it is important to document your findings as well. This could be as simple as using PowerPoint, or as structured as using Jira to assign tasks and comments. A change management process also helps in tracking all the changes found during the testing process. The most important thing, however, is to document everything, using screenshots and screen recordings wherever possible, as there is a high probability of points getting lost in translation.
Learnings for the next Project
A design review also gives the designer an opportunity to look at their designs with a fresh perspective. As they interact with their design, they can spot its shortcomings and apply those learnings to their next project.
Communication
The design team and the development team might interpret the same set of requirements differently. Designers and developers focus on different aspects of the same product and therefore see things from different angles and perspectives. Hence, it is important to communicate and understand the other team's point of view. Communicating early and communicating often is very important. Ask a lot of questions and talk through every requirement, page, component, feature, interaction, animation, anything at all, and take notes. If things are unclear, ask for clarification.
A weekly stand-up is a great way to touch base periodically. The development team can give a walkthrough of the developed code and ask for clarification. This will help in catching the errors early and clearing any conflict between the two teams.
Handover
The design team gets an opportunity to revisit the handover document and see whether they had handed over all the assets the development team required, or whether some screens, like error states and the 404 page, were missing. Were all the assets downloadable, or did the development team face difficulties accessing the images, icons, fonts, etc.? Was the development team able to bring the design team's imagination to life, or would a prototype have helped? These learnings can then be applied to future projects to make the handover seamless.
In my opinion, rather than just handing over the designs in an email, a style guide and design system walk-through would really help. In the walk-through, the design team can highlight all the templates and components used throughout the design. This can help dispel any ambiguity and get the two teams on the same page.
TL;DR
When it comes to any development project, quality is everything. In most projects, there is a team of QA testers who conduct comprehensive testing, essentially validating the output against technical documents. However, designers should break free from the classical roles and responsibilities of QA and conduct their own verification process. They have been involved since the start of the project, and it's their ideas and design that have come to fruition.
References
- https://uxdesign.cc/creating-space-for-design-reviews-bd23a14f307d
- https://www.smashingmagazine.com/2019/05/frontend-developers-designers/
- https://www.geeksforgeeks.org/software-engineering-verification-and-validation/
- https://www.geeksforgeeks.org/differences-between-verification-and-validation/
- https://www.guru99.com/verification-v-s-validation-in-a-software-testing.html
- https://www.boxuk.com/insight/cross-device-testing-why-and-how-you-should-do-it/
- https://uxplanet.org/the-design-review-between-designers-and-developers-75152868f717
- https://www.joshwcomeau.com/css/pixel-perfection/