If, like me, you are a physician or a nurse practitioner interested in technology and eager to help shape the future of healthcare, you have probably heard a thing or two about telemedicine.

I decided to become involved with Dialogue, as Chief Medical Officer, to help design the patient experience, operational model, and software around telemedicine practice. I chose to be part of this adventure while maintaining my practice at the CHUM (Centre Hospitalier de l’Université de Montréal), mainly to have a voice in these tools’ purpose and to understand how we can leverage technology to improve patient experience and care. Dialogue is considered the gold standard among telemedicine providers because of its safety and quality of care, its patient-centric approach, and its constant drive to improve the patient experience. Let’s face it: telemedicine will soon be just another means to reach patients, one available to every clinician.

Looking back two years, I realize that I had many questions revolving around safety, regulation, liability, continuity of care, sustainability, and more. Seeing patients remotely is certainly a great way to complement your medical practice and improve access, but it can be tricky and a bit overwhelming at first.

If you are contemplating diving into telemedicine, either through a private third party or within your practice, here are some key elements you should evaluate before moving ahead.

Scope of Practice

Similar to a hospital, a virtual clinic has to provide a safe technical and operational environment for the patients and the providers. That goes beyond a simple video consultation platform. 

Most important is the scope of practice. As a physician, you should have only one standard of practice. You have to be in a position where you feel you can safely deliver excellent care. You don’t want to prescribe tests or treatments simply because you can’t properly evaluate a patient; that would be ineffective and counterproductive to the goal of using telemedicine to provide safe and efficient care. As in any other clinical environment, we should avoid overtesting and overprescribing at all costs.

How? By ensuring that there are processes in place that prevent, as much as possible, cases that are not suitable for telemedicine from reaching you. You will find telemedicine work opportunities where doctors and nurse practitioners end up triaging all patients seeking to consult on those platforms. Please, in a healthcare system where resources are limited, use your valuable time for things that require your skills.

For example, you shouldn’t try evaluating patients with otalgia (ear pain) if you can’t look in their ear. You can’t assess a patient with laboured breathing without a stethoscope. You can’t palpate an abdomen remotely (yet). Yes, there is definite progress in terms of accessories to auscultate and examine a patient remotely. You should research the ones used by the virtual clinic to find out whether they are approved for use where the patient is located during the examination.

There is plenty of literature and use cases to define the scope of safe telemedicine practice: minor bruises, skin problems, mental health, sexual health, sore throat, and more. We don’t need to start assessing crushing chest pain in a 60-year-old smoker over a video call; those patients are better treated in the ER!

Autonomy of Practice

Third-party providers operate mainly on two models: direct to consumer, or as an employee benefit paid for by the employer. It is imperative that you maintain complete professional autonomy of practice, and you should never accept an operational model that prevents you from doing so. For example, rules that prevent you from writing a note for a medically indicated leave of absence for a patient-employee because of a contractual agreement between the telemedicine platform provider and the employer (client) cannot be allowed to dictate how you practice. Conversely, rules that require you to write disability reports for work injuries when you can’t physically examine the injury are against the standard of practice. In other words, you shouldn’t work for a platform that places the interests of others ahead of your duty to the patient.

The same conflict of interest principles apply to patient referrals for services and products: the patient must be free to decide where they will go for their lab tests and which pharmacy they will use to fill their prescription.


Chat, Video and Telephone Consultations

Virtual consultation is an umbrella term that includes any means of remote evaluation. Most of the time, these are chat, SMS (synchronous or asynchronous), video conference, and telephone.

Video consultations for non-urgent simple cases have been evaluated in studies and in a recent review of an Ontario Telemedicine Network (OTN) pilot project. These uniformly report that video consultations are safe and appreciated by patients and providers alike, with diagnostic accuracy equivalent to that of in-person visits for these cases.

Chat, on the other hand, should not be used as a primary means of communication for the evaluation of a new-onset problem in a patient you neither know nor follow. You are taking a needless chance if you agree to work on a virtual care platform that uses this method of communication to evaluate new illnesses. The research literature shows that video is the safest means of initial assessment, followed by telephone.

Chat and SMS are also poor tools for evaluating a new mental or physical condition, because they blind you to the critical information you can gain from visual inspection. That means they put you and your patient at risk.

Finally, while chat and SMS are efficient and safe ways to manage certain patient follow-ups, the messages must still pass through a secure platform with the highest standard of login authentication in order to avoid privacy breaches.


One of the aspects I cherish most about my involvement with Dialogue is that they own their technology: it is much easier to improve the product when you are sitting next to the team of engineers. I genuinely think this is the key to safety and innovation.

Dialogue has built a platform that is useful to providers and easy for patients to use. Healthcare is shifting from a sterile experience to a patient-centric approach, where the overall journey has to be as smooth as possible. With a robust data strategy, we can streamline care paths with artificial intelligence (AI) and automation, providing recommendations to the providers and helping the patient along the way. Achieving these results is much more tedious and challenging if the telemedicine provider doesn’t own the technology.

The same goes for privacy and safety: we know that Canadian patients’ data is stored in Canada, and that every effort is made to make the platform as secure as possible. Make sure to verify who owns and maintains the technology you will be using, and ask about compliance with data and security standards.


Safe Use of Automation and AI

Automation can fix many inefficiencies, and AI, when judiciously applied, can unlock better care and safety while enhancing the patient experience.

But we have to be cautious about how we implement these tools. In healthcare, where mistakes can have devastating consequences, I strongly advise keeping a human in the loop. That means that when providers automate workflows, with or without AI, they should make sure that it’s ultimately a healthcare professional who makes the final decision and communicates guidance to the patient.

For example, some platforms will automatically send patients messages similar to this one when the chief complaint seems out of the scope of practice: “Our doctor has reviewed your request and recommends that you visit a walk-in clinic or family practice.” This simple sentence raises many questions.

First, is it accurate? If you are the physician who rejected the patient’s request, did you really triage or evaluate the patient? Is the recommendation correct and, more importantly, sufficiently detailed? Did anybody else on the care team talk to the patient? Are you comfortable that, simply by working on the platform when the system generated the message, your name and reputation are backing the recommendation? Are you liable if something regrettable happens to a patient to whom the system provided no details about what to look for and within what time frame? You should ask questions about the use of automation and AI, how it is incorporated into clinical workflows, and how it could affect you and the patient.

In the next blog post, my colleague Dr. Mark Dermer, Chief Privacy Officer at Dialogue and Medical Director, will take a close look at other essential aspects to consider when choosing a telemedicine provider.


Interested in learning more about working at Dialogue? Click here.

Dr. Julien Martel

Dr. Julien Martel is a general practitioner at the CHUM ER who promotes evidence-based medicine, medical simulation, and education. Dr. Martel develops artificial intelligence tools and focuses on leveraging emerging technologies to improve patient access, safety, and quality of care.