Watson ER

The team for this project was composed of a multi-talented group of Interaction Design students: Eduardo Franco, Saloni Dandavate, Wenqiao Deng, and myself. Our goal was to imagine what the emergency room of the future could look like by understanding the needs left unmet today.

InVision Prototype

Process Book


We conducted research with nurses, doctors, other healthcare professionals, frequent visitors, and healthcare designers. We learned a lot about the industry: its legal constraints, the bureaucracy involved, the emotions associated with it, and much more. We decided to focus our project on emergency rooms, trying to make them work better within the complicated world of the healthcare industry.


Our greatest takeaways included the lack of feedback about waiting times, the differences between acuity levels, the extended waits for patients who are not in critical condition, and the fact that hospitals cannot turn away any patient before a doctor's diagnosis.


The Space

Our main idea was to use AI to help nurses perform their job rather than replace them. We found that this approach could speed up the process, give nurses and doctors more tools, and at the same time provide better service for patients by reducing waiting times and giving them clear feedback throughout the ER experience.


We decided to create a space with open counters in place of the usual front desk. This way, the nurse (standing behind the counters) can greet every new patient and decide whether they are facing a life-threatening situation (and need to be seen by a doctor immediately) or whether they should interact with Dr. Watson (the AI agent). The nurse can also easily move through the area and assist any patient who needs help.



Prior to the final testing (in class), we ran some preliminary tests with volunteers. These helped us identify flaws in our original designs. They were mostly minor details, but fixing them made the overall experience easier and more efficient.

The final testing was a great opportunity to demonstrate and observe the patient's journey through our experience. Having users test the screens in a higher-fidelity physical prototype of the environment was really helpful. We could also demonstrate how the nurses would interact with patients at different points: at the first greeting, and when the AI agent signals an emergency (via a direct message to the nurse's device as well as a visual light signal at the counter).


We broke the screens down into three use cases based on acuity levels. The first, appendix-related pain, might seem like a low priority, but the AI would be able to triage it as high priority. The second, heart problems, might seem like a high priority, but was diagnosed as anxiety by the AI doctor; the system would show the user that an urgent care clinic would mean a shorter wait, without denying care. This was a challenge to design, as hospitals may not deny service to any patient who walks into an emergency room. The third, neck pain, was determined to be a low-priority case.


We also included screens from the ER nurse's point of view, so they would be able to assist patients the AI could not. In cases where a patient was determined to be high priority, the nurse would receive an urgent notification to bring them immediately into the ER.
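The routing logic behind these screens can be sketched in a few lines. This is a toy illustration, not the actual prototype's implementation: the complaint names, acuity labels, and notification message are hypothetical stand-ins for the three demo cases described above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical acuity labels for illustration; the real triage
# model and its thresholds were not part of this write-up.
HIGH, MEDIUM, LOW = "high", "medium", "low"

@dataclass
class TriageResult:
    complaint: str
    acuity: str
    suggestion: str

def triage(complaint: str) -> TriageResult:
    """Toy rules mirroring the three demo cases from the prototype."""
    rules = {
        "appendix pain": (HIGH, "Notify nurse: bring the patient into the ER now."),
        "heart palpitations": (MEDIUM, "Likely anxiety: suggest urgent care, never deny ER care."),
        "neck pain": (LOW, "Low priority: show estimated wait time."),
    }
    # Anything the AI cannot classify is escalated to the nurse.
    acuity, suggestion = rules.get(complaint, (MEDIUM, "Escalate to nurse for assessment."))
    return TriageResult(complaint, acuity, suggestion)

def notify_nurse(result: TriageResult) -> Optional[str]:
    """High-acuity cases page the nurse's device and light the counter signal."""
    if result.acuity == HIGH:
        return f"URGENT: {result.complaint} at counter"
    return None
```

The key design constraint, that no one is denied care, shows up in the fallback branch: an unrecognized complaint is never dismissed, only escalated to a human nurse.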
