CarahCast: Podcasts on Technology in the Public Sector

Health Transformation Powered by Data Intelligence with Collibra

Episode Summary

In this podcast, a panel consisting of Dr. Barry Chaiken, the Clinical Chief at Tableau Software; Charles Gabrial, the Project Manager of Standards & Interoperability at the Federal Electronic Health Record Modernization (FEHRM) office; Cupid Chan, the BI & AI Committee Chair of LF AI & Data at the Linux Foundation; and Chris Cooper, the AVP of Health and Education at Collibra, highlights the value of data intelligence within the healthcare industry.

Episode Transcription

Speaker 1: On behalf of Collibra and Carahsoft, we would like to welcome you to today's podcast, focused on health transformation powered by data intelligence, where Dr. Barry Chaiken, the Clinical Chief at Tableau Software; Charles Gabrial, the Project Manager of Standards and Interoperability at the Federal Electronic Health Record Modernization office; Cupid Chan, the BI and AI Committee Chair of LF AI and Data at the Linux Foundation; and Chris Cooper, the AVP of Health and Education at Collibra, will discuss how data intelligence has worked to power transformations within the healthcare industry.

Dr. Barry Chaiken: Welcome everyone to our panel. I am Dr. Barry Chaiken, Clinical Lead at Tableau. I am thrilled to have such learned colleagues with me today. I'm going to start with Dr. Gabrial. 

Dr. Charles Gabrial: Hi. I just want to start with a disclaimer as federal government staff, and take a second to introduce my organization, the Federal Electronic Health Record Modernization, the FEHRM. The mission of the FEHRM is primarily to implement a single, common federal EHR to enhance patient care and provider effectiveness, wherever care is provided. I just want to share with you some of the scope of the FEHRM: it covers about 46,000 community partners, about 200,000 beneficiaries plus 150 providers, 9 million eligible beneficiaries, and 11 million visits. That's the scope of the FEHRM, which involves the DoD, the VA, and the US Coast Guard. Thank you.

Dr. Barry Chaiken: Thank you, Dr. Gabrial. I know that you have done much work on artificial intelligence. Most of us who do not work in the field do not have a really good handle on what artificial intelligence is. Dr. Stephen Hawking warned us of the dangers of AI, and we have all seen too many science fiction movies that portrayed AI gone wrong and destroying all of humankind. Can you share with us your perception of AI and ground us in a true picture of what it is and what it is not?

Dr. Charles Gabrial: It's a very loaded question, and a very smart question. I think the reality now is that there is a huge gap between what he envisioned and the current state of AI; there is definitely a learning curve to catch up with that very lofty vision down the road. But I think what he was trying to say is that maybe at some point, when we have a smart city, when everything is based on automation and AI and they are intertwined together, we need to make sure the management behind the scenes will guarantee or ensure a quality outcome of applying AI. But at this point, we have a long way to go until we reach that point. In the meantime, hopefully we can train our workforce, the new generation, to manage such complexity.

Dr. Barry Chaiken: Welcome, Mr. Chan. You've spent much of your career working with big data and AI. What do you see as the future of AI?

Cupid Chan: Well, I know AI sometimes may give an amazing impression, like it will change the world. But you know what, in my experience and perspective, I think this is yet another technology. What I mean is, think about a ballpoint pen, right? When we first invented the ballpoint pen, it was a technology people really wanted to get. And the technology really impacted humans when it started getting embedded in our daily life. The internet is the same thing. When the internet first happened, when it was invented, companies, all the internet companies, started up and used the internet as their core technology. But the technology really made an impact when it became embedded, to the point that right now we actually rely on the internet as part of our daily life. And I believe AI is exactly the same thing. AI can really impact all of us as human beings when we stop talking about AI anymore. That's the first point. The second point, which I think is also very important, is that of all the technologies we have invented, no technology can stand alone by itself, and AI is the same. That's why in my LinkedIn profile you can see that I am an AI entertainer and blockchain adopter; I call AI and blockchain the yin and yang of modern technology. AI is something ever changing: as we get more and more data, the model changes; it is a probabilistic technology. Blockchain, on the other hand: once you record a transaction in the ledger, nothing can be changed from that point. So it's like the yin and yang, and the moment we really combine these kinds of technologies together, this is how we can leverage a trustworthy technology and unleash the power of AI. That's just my point of view.

Dr. Barry Chaiken: That's really very interesting. Do you see any obstacles to using AI fully?

Cupid Chan: Yes, of course. I think it's the mindset and the culture, if you don't have a good mindset and always think that this kind of technology will destroy the whole human race. Think about when we invented the engine in the industrial revolution: people may have thought, oh, that's not good, because you're going to eliminate, for example, a whole job category, which may be true. But on the other hand, the opportunity brought in by the engine was also tremendous. Same thing for AI. I think we need to be aware that AI is just yet another technology; it is an enabler for the true business that we are doing every day. The real necessities that we have as human beings will not change because of AI, but it is an enabler that will help humans live better in the future.

Dr. Barry Chaiken: You know, as a child I read a lot of science fiction, a lot of Isaac Asimov. It's really great to hear the two of you talk in a positive way about AI, because I was very much worried about it, and I trust you that it's going to bring some really great things into our lives, even beyond what it's already done. So thank you for that. I want to switch to Mr. Cooper. I know you've done a lot of work in the field of data governance. Is data governance important when trying to build a data-driven organization?

Chris Cooper: Thanks, Dr. Chaiken. Great question. We really look at data governance as the practice of managing and organizing data and process to enable collaboration and compliant access to data. Governance allows users to be creative with the value of data assets, and under the constraints of security and privacy, data can be effectively applied to AI, to new insights, to all of the capabilities that we all realize data brings to our lives. But what we find is that too much of the focus around data is on the data management technologies. The key to becoming data-driven, whether that's through great analysis and visualization technologies, which are important, or platforms which allow us to manage these vast stores of information, which are absolutely critical, is that, just like dropping me behind the wheel of a Formula One racecar isn't going to capture any checkered flags, we have to have the best technology aligned with the best process, the best skills, and the best resources. Data governance helps us unlock the potential of our people and our process and join that together with the data to make incredible breakthroughs. We don't have time for every data user to spend hundreds and thousands of hours to understand the technologies; data governance provides the guardrails to keep those rookie drivers, our data users, on track so that we can get the most out of the amazing technologies we're seeing, whether those are AI-based or cloud data platforms, and the ability to bring together and drive out correlations across a data landscape that we haven't been able to see historically. At Collibra we see data governance as the key to unlocking data intelligence, where data intelligence is that connectivity of the right people, insights, process, and algorithms to allow our data users, what we call data citizens, to optimize the process, increase efficiency, and really drive innovation. So we connect the data, the context, the meaning, the purpose, the people, and the process, often in the form of a policy, back to the meaning and the outcome that we're looking for. When we have that context around data, its meaning, provenance, and the reason for its initial capture, we can really drive the insights that we're looking for from our data to transform our organizations.

Dr. Barry Chaiken: I want to follow up on one thing. You mentioned data governance and you mentioned data culture. What about data literacy? Can you expand a little bit on what that is and what it means within the context of data governance and data culture?

Chris Cooper: Absolutely. You're exactly right, Dr. Chaiken, the key to unlocking the potential within our data is enabling that data literacy. We see data governance as the foundation for literacy: it is the book on the shelf, and our literacy is our ability to read that book and understand its content. So as we enable our users to become more data literate, we use the tools of data governance to provide the context, the definition, the meaning, and the provenance of where that information came from, to help our users become more literate and understand, appropriately, how they can apply those data insights.

Dr. Barry Chaiken: I have to tell you, in my time at Tableau, I've obviously thought about data governance, data culture, and data literacy. But what I've really loved about your response is the description of data literacy as being able to read the book. I thought that was a really fantastic metaphor for what data literacy is. So thank you so much for sharing that. If it's okay with you, I'd like to steal it and use it when I share my own thoughts. That was really, really fantastic. So thank you. I want to switch to Dr. Gabrial again. I've heard about the Advanced Technology Academic Research Center. Can you share with us what that center is and what it does? Additionally, what federal initiatives are currently underway there to improve AI and ML?

Dr. Charles Gabrial: There are two organizations that have several initiatives promoting AI, machine learning, and predictive analytics. The first one is ATARC, and we'll share the website with the audience. It's a channel for industry and for government individuals to learn more about what's going on in acquisition in the federal government, and there are elite organizations participating in this endeavor: OMB, the Secret Service, and several others. So this is the first one. The second one is the American Council for Technology, and again, we are going to share the website. They care about improving the acquisition process, the workforce, and the training, even adding job descriptions for the incoming workforce, and increasing the federal government's knowledge and training around those areas. So we'll definitely share points of contact at the end of this discussion.

Dr. Barry Chaiken: Dr. Gabrial, can you share with me some examples of the work that the center is currently engaged in, and what you think the outcome of that work will be?

Dr. Charles Gabrial: So ATARC is writing a paper for OMB that goes over all the knowledge domains, job descriptions, qualifications, and possible training, so the federal government can take proactive action on developing the workforce. We have a small community where we discuss how leadership can manage the lifecycle of AI and make sure that the process goes well, and can also accept out-of-the-box thinking. As you can imagine, this will require a lot of different leadership in the federal government and the organizations involved. There are very intensive discussions in the White House and Congress; the White House especially is taking a stake in this process. So there are many elite individuals, from Harvard and from the federal government, involved in shaping the initiative. And again, the second organization, the American Council for Technology, is involved in shaping the acquisition process for AI, robotic process automation, machine learning, and so forth.

Dr. Barry Chaiken: Mr. Chan, you and I were talking the other day, and you mentioned the term cognitive intelligence. I've heard of business intelligence. But what is cognitive intelligence?

Cupid Chan: Thanks, Dr. Chaiken. You still remember what I told you; that's very good. But before I answer that, let me also share some past experience I had with Dr. Gabrial, because back in 2017, this was actually the second panel that Dr. Gabrial and I did together; we were on another panel at a conference on healthcare IT and analytics. And in that panel, the moderator asked me, hey, Cupid, what do you think about analytics in the next few years? Out of the blue, I said, okay, maybe it's AI plus BI equals CI. At that point, I literally googled CI and cognitive intelligence, and nothing was there, so I can claim that I coined that term myself, although that depends on whether Google's algorithm is precise or not. But what do I really mean by AI plus BI equals CI? AI, we all know, is machine learning, and an algorithm has a tremendous speed of learning; no human being can ever learn at that speed. You throw one terabyte of data at that algorithm, and after an hour that AI algorithm will find a pattern for you. So that's the speed AI can provide. On the other hand, BI is very well known, traditionally, over the past twenty-something years, for helping humans expose or unleash their intuition about the direction of the analytics. So think about the speed from AI or machine learning, and the direction that the BI tool can help a human being with; combine these two together, and I call it cognitive intelligence. That's the real, actual intelligence that we human beings want as the destination of analytics. But that's not the end. You may have realized that in the marketing material, my title on this particular panel is Chair of the BI and AI Committee in the Linux Foundation, LF AI and Data. And one of the projects for this year, a project I am leading, is called OBAIIC, spelled O-B-A-I-I-C. You may ask, what is OBAIIC? Well, you must have heard about ODBC, right? ODBC is the middle layer between the BI tool and the database, so that any BI tool coming in doesn't need to learn about Oracle, SQL Server, Teradata, or anything else; it just needs to learn ODBC, that protocol, because at that time, thirty years ago, database technologies were so spread out, and we needed a standard to make sure that when a BI user comes in, they can talk the same language. Fast forward thirty years to today: you know what is blooming, and it is AI and its frameworks. We have TensorFlow, we have Caffe, we have Keras, we have PyTorch; we have a lot of AI, and they are all open source platforms. If a BI tool wants to talk to the underlying AI framework effectively, we need something very similar to what ODBC is for databases, but this time it is Open Business and Artificial Intelligence Connectivity, between the BI platform and the AI layer. At the Linux Foundation, LF AI and Data, we are defining this layer this year with the leading products; Tableau is one of them. I'm talking to Tableau, Qlik, MicroStrategy, and the other BI leaders in the market to define it. And hopefully, after we define and implement this layer, we can then freely talk between BI and AI and really accomplish CI, cognitive intelligence.
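
To make the ODBC analogy concrete, here is a minimal sketch, in Python, of what a uniform BI-to-AI connectivity layer could look like. This is not the OBAIIC specification; the interface and class names (AIConnector, SklearnConnector) are hypothetical and exist only to illustrate the idea of a BI tool talking to one abstract interface instead of to each AI framework directly.

```python
# Hypothetical sketch of an ODBC-style connectivity layer between BI and AI.
# Not the OBAIIC spec: names and methods are illustrative assumptions only.
from abc import ABC, abstractmethod
from typing import List, Sequence


class AIConnector(ABC):
    """The single interface a BI tool would call, whatever framework sits behind it."""

    @abstractmethod
    def predict(self, rows: Sequence[Sequence[float]]) -> List[float]:
        """Score a batch of rows and return one prediction per row."""


class SklearnConnector(AIConnector):
    """Example adapter wrapping any already-fitted scikit-learn estimator."""

    def __init__(self, fitted_model):
        self._model = fitted_model

    def predict(self, rows):
        return list(self._model.predict(rows))


# The BI layer only ever sees AIConnector, never TensorFlow, PyTorch, or
# scikit-learn directly -- just as it sees ODBC instead of Oracle or Teradata.
```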

Dr. Barry Chaiken: I want to go back to data governance for a moment. The reason I want to do that is that I think you can't do effective business intelligence, or any type of analytics, unless you have a strong data governance structure. It also has to be flexible and agile so that you can respond to the needs of the organization, whether it's a payer, a provider, or a government agency, and even outside of healthcare it's still important to have that data governance model and also to harmonize the data. So I'm going to shift back to Mr. Cooper. Can you share with us the key factors that organizations should focus on to build a high-performing data governance model?

Chris Cooper: Absolutely. The key to unlocking the value of data governance is to focus on your people. It's about beginning with the end in mind: what do your users need to understand about your data, your process, the context, and the meaning? How do we bring these together, and how do we form a structure of collaboration, so that at the most simple layer we can have a common understanding of the business concepts and the clinical concepts that we are speaking about, and how those relate and tie back to the data? It's incredibly important to think about how we leverage technology to then scale that process, so that we are minimizing the time spent on rote governance activities, like inventorying data sets and classification. We want to apply the knowledge and the participation we have to bring people together around the concepts and the meaning that data governance is ultimately helping us connect back to our data and the AI models that we're trying to drive across our organizations.

Dr. Barry Chaiken: Mr. Chan, you shared with me the term Beauty and the Beast, while also linking it to the phrase drag and drop. I suspect this has absolutely nothing to do with the animated film or the play on Broadway. So can you explain to me what you mean by those two phrases and how they're linked?

Cupid Chan: Sure, sure. I just answered you about AI plus BI equals CI, right? Besides the speed and the direction that I shared as the analogy for AI and BI, another thing I can think of is the beauty and the beast. Think about the beauty first. In the past, again, twenty-something, thirty years, BI has evolved a lot, from just a pie chart or a line graph, very primitive kinds of visualization to represent the data, to right now, where the user can be very engaged by different visualizations and layouts of the data. So BI has become an expert in beautifying the data and presenting it to the users. That's the beauty part. But think about what the beast is. If you just have a pretty face sitting at the top but you do not have the engine in the back, that's nothing if you really want to explore the data. And right now, I think we have already talked a lot about AI. What is that engine, what is that beast? It's AI. There are a lot of open source frameworks; as I mentioned, TensorFlow and PyTorch are that engine, they are that beast. But how can these beasts be user friendly? That's why I came up with the term Beauty and the Beast for analytics, because I want people to understand: you have the AI, which is good, but unless you can connect it back to your users so that they can understand what that particular AI algorithm is doing, it's useless. And that's the reason why, as I mentioned, this year we are doing something called OBAIIC, the ODBC for AI, but last year we actually did a white paper focusing on human-centered AI: how are you going to make AI more human and put the human in the center? One point that I brought up in that white paper is that it's not machine learning, it's machine teaching. Think about any successful school system: of course, you go there to learn as a student, but on the other hand, you cannot forget about the teaching piece. And I think machine teaching is also very important in the overall AI process. What do I mean by that? If you think about the whole AI process right now, starting from collecting the data, cleaning the data, trying to form the model, all the way to deploying your model to production, all the different steps, you will find that those steps are pretty siloed, pretty independent. And if you really want to put the human in the center, we want the human to inject our knowledge back into each of the different steps. For example, when we try to label the data, you can let the AI algorithm actually run the first round, and if, and only if, that machine learning algorithm cannot really label something, it will come to you as a human; then you will help the algorithm to categorize or label that data set. And then at the end, when you have the result of that model, when you're trying to predict the result but the machine learning algorithm is actually not very certain, because its confidence is very low, then the human should also come in to provide more information for that result. Then you teach the machine, and the machine will learn from you. That is the iterative process I mean. Again, by Beauty and the Beast, I think we really need to design the whole process, not only focusing on the underlying AI engine; we also need to focus more on how we can convey this message and this result to the user and interact with each other.
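
The labeling loop Mr. Chan describes, where the machine labels what it is confident about and defers to a human otherwise, can be sketched in a few lines of Python. The confidence threshold and the ask_human helper below are hypothetical illustrations, not part of any specific product or of the LF AI & Data white paper.

```python
# Minimal human-in-the-loop ("machine teaching") sketch: the model labels
# high-confidence rows itself and routes low-confidence rows to a person,
# then retrains on the combined answers. Threshold and helper are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.8  # below this, defer to a human reviewer


def ask_human(row):
    """Stand-in for a real review UI; here the 'human' always answers 1."""
    return 1


def human_in_the_loop_labeling(model, labeled_X, labeled_y, unlabeled_X):
    model.fit(labeled_X, labeled_y)
    new_X, new_y = [], []
    for row in unlabeled_X:
        proba = model.predict_proba([row])[0]
        if proba.max() >= CONFIDENCE_THRESHOLD:
            new_y.append(int(np.argmax(proba)))   # the machine labels it
        else:
            new_y.append(ask_human(row))          # the human teaches the machine
        new_X.append(row)
    # Retrain on everything so the machine "learns from you".
    model.fit(np.vstack([labeled_X, new_X]), np.concatenate([labeled_y, new_y]))
    return model


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(40, 3)), rng.integers(0, 2, size=40)
    human_in_the_loop_labeling(LogisticRegression(), X, y, rng.normal(size=(10, 3)))
```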

Dr. Charles Gabrial: If I may comment on his statement, this is very, very intelligent; I like it a lot. Going back to the first question, Dr. Chaiken, when you asked me about AI taking over the world and all that stuff: his point, what he's saying, is that AI does not have the absolute power to dominate humans. It has a certain focus, a certain niche, and you're going to train the AI on a certain niche. So if you train the AI to clean a certain space, the AI is not going to grow and spread beyond the space that you asked it to work in; it's going to be focused on specific things. So that's one factor. The second factor is the interaction between the human and the AI, and the iteration; that is what makes the AI valuable, and that's where data governance can come in. Interpretation of the AI data, in our world, in our evidence-based analysis, plays a big part in how you interpret the data and what type of decision you're going to make. And that, by itself, to some degree, means AI is not going to become dominant, period, unless we're going to have the smart city that I talked about in the beginning.

Dr. Barry Chaiken: Mr. Chan, you mentioned the word beast, right? Well, when I think of the beast, I'm thinking of the challenges we've had in healthcare around interoperability; it's been a major challenge for a really long time. Work by the former ONC head Don Rucker and his team in setting federal rules has done much to enhance interoperability and limit data blocking. Knowing Micky Tripathi, the new ONC head, I'm very confident he will do a fantastic job building upon Dr. Rucker's work. So Dr. Gabrial, considering the current state of interoperability, what types of obstacles still exist that prevent analytics, and what do you expect the future to look like as interoperability improves?

Dr. Charles Gabrial: As you know, the good news is that we have reached good maturity on all levels: technology, process, semantic, and technical. Interoperability requires all those ingredients, if you will, to mature over time, so we have come a long way. In terms of health, of course, interoperability in the context of health, we came a long way, even though it's easier in other industries and other applications. So definitely there is a lot of room for improvement, but we came a long way. We need to make sure that when we have a workflow in the electronic health record, we can take this workflow and send it to a system somewhere else, from system A to system B. We can tag this workflow, and maybe system B would have the knowledge: if we just label or tag this workflow enough, system B will ingest the workflow and understand that this is for a lab order or a particular episode. If we send it, and we have successfully sent it to the system, the next level is to consistently take it in within the receiving system, regardless of whether that system is similar or not: take it inside, translate it, ingest it, put it in a context, and use it for something else. That's the second layer of interoperability. So we're still working in the semantic layer. We came a long way in the technical layer, in moving data around, but in ingesting and digesting there's still a lot more to come.
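
One way to picture the 'tag it enough that system B understands it' idea is an HL7 FHIR-style lab order. The sketch below is purely illustrative: the LOINC code and patient reference are hypothetical placeholders, and it is not a description of the FEHRM's actual payloads.

```python
# Illustrative HL7 FHIR-style lab order (ServiceRequest) as a plain Python dict.
# Codes and identifiers are hypothetical placeholders, not real orders.
import json

lab_order = {
    "resourceType": "ServiceRequest",          # tells the receiver this is an order
    "status": "active",
    "intent": "order",
    "code": {
        "coding": [{
            "system": "http://loinc.org",      # shared vocabulary acts as the semantic tag
            "code": "12345-6",                 # hypothetical LOINC code
            "display": "Example lab panel",
        }]
    },
    "subject": {"reference": "Patient/example-patient-id"},
}

# System A serializes and sends this; because the structure and vocabulary are
# shared, system B can parse it and know it is a lab order without guessing.
print(json.dumps(lab_order, indent=2))
```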

Dr. Barry Chaiken: Thank you. Mr. Chan, we know that EHR companies have some responsibility for the lack of interoperability. What do you think we need EHR vendors to do to open up access to the data within their EHRs, and why do you think they have been reluctant to date to accommodate the demands of their users?

Cupid Chan: Yeah, I think that's a very good question, but it's not a hard question. Think about it this way: hey, Dr. Gabrial, I mentioned that we have known each other since 2017, right? Can you open up your house for me? Or hey, Mr. Cooper, how about you, can you open your house, or maybe lend me your car so that I can drive it? And Dr. Chaiken, same thing. Because what you own, the house, the car, is your asset, something that you own, or at least you believe you own. This is exactly the same thing for EHRs. Many EHR systems believe they should be the ones owning the information that they collect from the hospital and from the patient, and they use this asset to generate their income. There's nothing wrong with that, but it is a fact that they treat data, patient data, hospital data, as their own asset. And you may have heard that there is a very famous analogy, someone saying that data is the new oil, right? Oil is an asset, not only at the personal level but at the country level; every country wants more oil. But I heard from Dr. Wong Chen a closer analogy for what I believe data is: data is fire, not oil. Fire has two meanings here. First, if I have a fire and I pass the fire to you, you will have my fire, and you can keep passing it on. That means if I share my data with you, you can easily make a copy, and it becomes your own data, your own asset. That's number one. Number two, depending on how you use the fire, it can have a positive side; we all use fire to cook, for example. But fire can also burn down the whole house, or even the whole forest. So I believe, if you really think about what each EHR is currently doing, we cannot blame them, but we need a better, newer, more modernized, or even revolutionized, way to think about EHRs. That's why, starting this year, I became unemployed, or an entrepreneur, because I started a company targeting exactly this problem. My goal is to decentralize the EHR, so that patients can own their own data again, and at a certain point they can even generate their own data. And we keep talking about patient data as a privacy problem, right? But if you think about it, it's not only patient data that is a privacy problem. When an AI company or machine learning company sends out its model so that the model can be trained, that model could be compromised as well. So privacy is on one hand; data privacy is important. But model privacy, protecting the model so it is not stolen by someone else, is also very important. And I know this is not commercial time, so I'm not going to go deeper into what this company is doing. But my vision is to protect both sides, the data producer and the data consumer; the data producer here is the patient, and the data consumer is the model producer. And then at the end, once the model gets trained, that model can also be monetized, and maybe the benefit from this model can flow back to the patient, if we really strike the right balance between the data and the model. So I think that's why privacy-preserving AI is something I believe is also very important, and is the next step for AI and machine learning.
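
Mr. Chan does not name a specific technique, but one well-known pattern that fits what he describes, the data staying with its owner while only the model moves, is federated learning. Below is a minimal federated-averaging sketch under that assumption; it is not a description of his company's product, and the linear model and sites are toy placeholders.

```python
# Toy federated-averaging sketch: each site computes a local update on its own
# data; only model weights, never raw records, are shared and averaged.
# This is an assumption-based illustration, not any vendor's implementation.
import numpy as np


def local_update(global_weights, local_X, local_y, lr=0.1):
    """One gradient step of linear regression computed where the data lives."""
    preds = local_X @ global_weights
    grad = local_X.T @ (preds - local_y) / len(local_y)
    return global_weights - lr * grad


def federated_round(global_weights, sites):
    """Every site trains locally; the server only averages the returned weights."""
    updates = [local_update(global_weights, X, y) for X, y in sites]
    return np.mean(updates, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sites = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
    weights = np.zeros(4)
    for _ in range(5):
        weights = federated_round(weights, sites)
    print("aggregated weights:", weights)
```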

Dr. Barry Chaiken: I have to share with you this anecdote, and I won't mention the actual person involved. I've heard the anecdote of Joe Biden, when he was Vice President, sitting next to the founder of an EHR company. He was talking about how wonderful the 21st Century Cures Act would be in terms of discovering cures for cancer and rare diseases and such: it'd be great, we'll be able to take your data, meaning from the EHR company, put it in a database where researchers could access it, and we can do wonderful things in healthcare. And that founder turned to the then Vice President and said, what do you mean, your data? It's my data to use any way I want. It was very interesting, because my perception has always been, and I imagine most people's perception is, that the data belongs to the patient, first and foremost, and it is allowed to be used in their care by the various provider organizations or vendor organizations in ways that they have permission to use it, based upon privacy rules and such. But I thought that anecdote was very insightful into how some EHR vendors think about their relationship to the data that exists in the EHR. Another thing that you said: you talked about fire, and when you talk about interoperability and you talk about fire, I think of the HL7 FHIR standards. So it was really interesting how you brought that into the conversation. I'm going to switch a little bit and ask Mr. Cooper another question. What role does data governance play in protecting privacy, in particular PHI, and what is the impact of expanding interoperability on data governance?

Chris Cooper: It's a great question, and I think it ties back to exactly what Cupid was discussing. We've seen that HHS's final interoperability rules released last year are a really good step forward toward this notion of patient ownership of data, or at least the right to access my data and to understand how it's being exchanged. We're seeing a similar concept around privacy, really core to the consumer privacy protections that are arising in Europe around GDPR, or with CCPA: that notion that it is the individual that has ownership, or at least control, or the right to control, how that data is used appropriately. So personally I'm very, very excited about the direction we're headed as it relates to interoperability and privacy. And I think when we marry those two concepts, thinking about privacy and data ownership at the individual level, and think about how we can extend that notion of individual rights, it's incredibly important to think about how governance brings that process together, often in the form of policy. So how do I start to apply policy appropriately to a set of classified data, with a notion of people and ownership on top of that? It's really, again, that intersection of people, process, and data being driven in a practical way by where we're headed from an interoperability perspective, and I think it is challenging this old notion of data ownership and where that control resides. I'm incredibly excited about what this means from a technology perspective, to help us unlock interoperability by placing that control and that transparency in the individual's hands. If I know where my data is going, if I know how it's being used to support my care, I'm far more likely to be supportive of that process than concerned about how that data might be used against me in the future, in the form of, you know, higher insurance rates, or all of the ways that we can see data abused from a privacy perspective. So when we provide this transparency, it brings along that support for individuals to be more open to sharing information, and it places that control where it should be: back with the individual, who actually bears the long-term benefit, or the long-term damage, that can come from a violation or inappropriate use of that data.

Dr. Barry Chaiken: Dr. Gabrial, what technologies do you think can improve interoperability, and how will they work?

Dr. Charles Gabrial: As I mentioned, the technology is mature enough, but interoperability especially is not about the technology in the forefront. It's mainly about, to use this analogy, you have two kids, maybe twin kids from the same parents. One kid grows up in country A and the second one grows up in country B, and then the two kids meet at some point. If they don't speak the same language, didn't grow up in the same environment, and don't understand the culture, the workflow, the process, the semantics, even though the two kids are from the same parents, they will never get to understand each other. It's the same with healthcare interoperability. I think we need to build the way, the same way we have a smart highway, or the same way we have some kind of good banking system; we probably need to have a strong health network of translation, if you will, so systems will understand each other. I think that's where we're at, at this point.

Dr. Barry Chaiken: Mr. Cooper, do you have some suggestions you can share on how to get started on the road to good data governance in a world of increased interoperability?

Chris Cooper: Well, I think governance is very much the key to unlocking the challenge that Dr. Gabrial was just speaking about, as it relates to semantic understanding and the ability to communicate. If we don't understand what things mean within our organization, we certainly can't begin to interoperate and share that understanding outside of the organization. So with governance, whether it's internal to an individual organization or across organizational boundaries, where interoperability becomes critical, we need to have the governance in place to not only define the semantics, but to define and understand the care process and the care workflow, again, as Dr. Gabrial was saying. So many of these challenges that are core to the vision of interoperability the industry is dreaming of really can be unlocked by thinking about how we align semantically, and how we align from an information organization and workflow process standpoint. So again, really understanding the people, how we define the concepts, as well as the overall process that occurs throughout care delivery, is, in my view, the key to unlocking interoperability and the remaining challenges that interoperability still faces, so that we can have that transparency. And again, we tie that back to a common understanding of the privacy associated with it, and how that relates back to the individuals themselves. So having that strong agreement, and the ability to define those concepts and tie them back into our policy and our data, is the key to interoperability.

Dr. Barry Chaiken: Thank you. Mr. Chan, I know what an Iron Chef is, perhaps from having watched too many Bobby Flay videos on the Food Network. That said, you shared with me that we should look to be an AI-run chef in an AI kitchen, a video you produced. What do you mean by that?

Cupid Chan: Actually, a small, minor correction: it's an Iron Chef in a data kitchen. You know the Iron Chef show? I love it, just like you do, Dr. Chaiken. It's one of my favorite shows, because I love eating. Believe it or not, before I had my two kids, my wife and I would actually drive to New York just for food in Chinatown, and drive all the way back, a one-day trip just for the food. So you know that I love food. When I got into the AI industry and this technology, I started thinking that it is actually very much like cooking, because I have a story. Let's say one day two chefs have an argument. One says, hey, my dish is the greatest dish in this world because I have the freshest ingredients. The other chef disagrees and says, no, no, no, I actually have the most secret recipe, which will make my dish the best, or at least better than yours. You know the two chefs are arguing about something that neither can win, right? Both fresh ingredients and the secret recipe of how you cook a dish are very important to come up with a great dish. Think about AI; it's the same. In AI, data is the ingredient. You cannot throw some garbage data at an algorithm and expect it will help you and give you a model that can predict everything. That doesn't make sense, because it relies on the data. But at the same time, the algorithm is the recipe: how do you really cook that dish to the point that it can be consumed by people, by users? So both are very important and have to work together, and we cannot lose either one of them. And that actually anchors back to our topic throughout the whole panel, right? Data governance: where is the data, and is this data trustable? That is the ingredient; it provides the ingredient to cook a good dish. But on the other hand, you still need the steps to cook that data. Otherwise, it will just be a waste of very fresh and highly curated data: if you don't have a proper way to cook it, it's still not good for us to consume. So that's why I decided to produce a video called Iron Chef in a Data Kitchen, and basically it uses this analogy to walk through seven different steps, from data collection all the way to the prediction from the AI algorithm.
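
To make the ingredients-plus-recipe analogy concrete, here is a minimal Python sketch of a data-to-prediction flow. The stage names below are illustrative only; they are not the exact seven steps from Mr. Chan's Iron Chef video, and the data set is simulated.

```python
# Minimal "ingredients plus recipe" sketch: collect and clean the data
# (ingredients), then let the algorithm (recipe) turn it into predictions.
# Stages are illustrative, not the seven steps from the Iron Chef video.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 1. Collect: simulate a small, somewhat messy data set.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)              # label depends on the first feature
X[rng.random(X.shape) < 0.05] = np.nan     # sprinkle in missing values

# 2. Clean and 3. Cook: impute the ingredients, then fit the recipe.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
recipe = make_pipeline(SimpleImputer(strategy="mean"), LogisticRegression())
recipe.fit(X_train, y_train)

# 4. Serve: taste-test the dish on held-out data before serving predictions.
print("held-out accuracy:", recipe.score(X_test, y_test))
```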

Dr. Barry Chaiken: There has been much talk about social determinants of health. This is especially true due to the health disparities seen during the pandemic in communities of color. Dr. Gabrial, can you share with us some of the work researchers are doing on social determinants of health within your organization?

Dr. Charles Gabrial: Yes. Of course, my organization, the FEHRM, the Federal Electronic Health Record Modernization office, is investing in social determinants of health and in HL7, the standards for how we represent transportation insecurity, food insecurity, and some other factors like stress, and so forth. Social determinants of health, I see, play a big, big role in the current crisis and in future crises as well. As you can imagine, first of all, you can use social determinants of health to understand the behavior of the community, the behavior of society, before any crisis hits the community; based on their lifestyle, their health status, and disparities, you can pretty much predict the severity of the crisis, and accordingly you can craft the intervention and craft the information campaign to protect society, or at least, if you're not going to prevent the crisis, you can reduce its severity. At the other end of the journey, you can also use social determinants of health to plan the logistics of the vaccination plan and the logistics of the continuing interventions. And for future health, for future mitigation, you can use social determinants of health for prevention, to reduce the severity of future crises as much as possible. So that's, at a high level, how you can use social determinants of health to respond to a crisis.

Dr. Barry Chaiken: Mr. Cooper, I want to go back to data governance. How does good data governance facilitate research and collaboration?

Chris Cooper: Well, as we've talked about, it establishes that common understanding that addresses the semantic unknown that Dr. Gabrial had spoken to. One of the foundations that we need to drive good research is a commonality, a common understanding of the data that's available to support that research process. It's also important, as Cupid was saying, to look at how we understand the quality of the ingredients that are feeding our research process: being able to understand the freshness of that data and the completeness of that record, especially as we're starting to bring in much more diffuse data sets, like social determinants of health, that are not well quantified and not necessarily well understood. We need to have a common understanding of what that means for some of these new determinants that we're looking at, such as proximity to a food desert. How does that impact the research that I'm driving? What is my definition of a food desert, and does it carry across the population of data sets that I'm working with to drive my research? Ultimately, that collaboration needs to expand beyond any one institution, agency, or organization; we need to have a strong understanding of the concepts that we're working with, as well as how we are facilitating and driving that research. Ultimately, we also see that governance is critical to access and to supporting that collaboration. As Cupid indicated, we have to have trust in our research partners. If I'm going to allow you to use my data, to drive my car, to do your research, I need trust that it is going to be returned safely, that you're going to use it in a responsible manner, that it will continue to be accessible for my use as well, and that I won't have my valuable asset, my data, damaged or impacted in a way that would ultimately harm my research and the effort that I've invested to support it. So we see governance both in aligning our understanding of the data being used and the challenges we're facing, and in reaching agreement on how that data can be used across an extended research collaboration, from both a trust and an appropriate-access perspective. Those are the ways we see governance really playing a key role: as we start to establish broad sharing and collaboration across organizations, we need to have that framework that allows us to support the process and the people that ultimately the data and the research will drive.

Dr. Barry Chaiken: Dr. Gabrial, you shared with me that you think a lack of information made the pandemic worse in the US. What do you mean by that?

Dr. Charles Gabrial: The first thing that comes to everyone's mind is that we didn't know there was something happening outside of the US, in terms of the spread of the pandemic. So that's the first lack of information. The second part of the information that we lacked is, okay, the crisis came and the crisis hit our whole society: how many resources do we have, and how much spread, to mitigate the spread? That's again a lack of information. The third layer of lacking information, of course, is that we didn't use social determinants of health to predict, to some degree, early enough, that we have certain areas that can be more prone to spreading the virus than others. I think, hopefully, that's a lesson learned for us, so that in the next cycle we will address all those different layers and reduce the severity of the next crisis if it happens.

Dr. Barry Chaiken: You know, we've heard such really great content today from all of our speakers, but I want to bring out two things that I walked away with that I thought were really important. The first is the concept that learning to read implies data literacy; I thought that was a really great metaphor. The second is the idea that in order to produce a good meal, in terms of analytics, you need to have good ingredients, and a good ingredient is having the right data; when you want to do AI and machine learning, you also have to have good data. I think if we walk away with any two concepts today, those are really wonderful ones to walk away with. Again, I want to thank Dr. Gabrial, Mr. Chan, and Mr. Cooper for their time today. I want to thank the audience for spending the time with us. Everyone, have a great rest of your day. Thank you so much.

Speaker 1: Thanks for listening. If you'd like more information on how Carahsoft or Collibra can assist your organization, please visit www.carahsoft.com or email us at collibra@carahsoft.com. Thanks again for listening and have a great day.