All right. Welcome to lecture three, Data Equity in Collection and Sourcing. Our goal here is to identify elements of data equity in collection and sourcing. To do that, we're first going to review the readings and highlight some key takeaways, then apply data equity concepts around data collection to research processes and the steps to improve health equity, and apply data equity knowledge to the knowledge to action framework. This knowledge to action framework might be new to some of you, but I think it really demonstrates how many different places data equity plays a part as you move through an implementation science model. Then we're going to review some case studies. First, a review of readings. I want to go over the readings just a little bit. Conceptually, the More Than Numbers document is a guide toward diversity, equity, and inclusion in data collection. Big picture, what the authors are trying to do is offer some standardization and some guidance for how to collect data through the lens of DEI. This is a problem and a question that people have been struggling with for a long time. The field of public health has been addressing DEI questions through the methodologies of epidemiology, health services research, policy, advocacy, and more. The question is, how do we collect data in a way that promotes equity? Or, if it doesn't actively promote equity, at least doesn't perpetuate stereotypes and cause more harm. I encourage you to look at the working definitions throughout this document. For example, on page 11, there is a working definition of the phrase marginalized groups. The issues of health equity have long been talked about and long been published about. But data equity, especially when it comes to big data, is a relatively new field of empirical study. It's not new in terms of its thought processes, its questions, and its advocacy, but it's new in terms of being an element of course content or an element of what academics publish on.
Because of this newness, there are some differences in the definitions of terminologies and in how different groups talk about key concepts. I thought this document did a nice job of recording and documenting some working definitions for us to work from, because throughout the field there are arguments and a lack of consensus on the definition of key terms. I like the disclaimer that they included in this document because it really highlights this point. What they say is: we recognize that we do not represent the views and opinions of all of the communities discussed in this document and acknowledge that many organizations are already doing meaningful and challenging work to make our communities more inclusive and equitable. We spoke with and solicited feedback from numerous individuals to ensure that this guide is as useful and practical as possible and incorporates the voices of communities that have been previously reduced, silenced, or erased. We also value and believe in continued learning and recognize that this guide might not be perfect. They're saying it's a start. They've narrowed the applicability of this document to program surveys, but they acknowledge that it may not be as pertinent or as useful a tool for precise research around different health conditions, and that it may not apply for researchers working in demography. I want to start our highlights by looking at page 11 and the questions for consideration. What they're really driving at here is making goal-based decisions in survey development. So: what information are you collecting, and for what purpose? It's best practice not to build questions into a survey that don't serve the intended purpose and don't answer the question at hand for the larger study or the larger survey inquiry.
In bioethics, and in conversations surrounding research, we're always trying to balance participant burden against the value of data collection. We shouldn't ask questions that we don't intend to use and that aren't helpful for answering the overall research question or the overall point of the survey. When I was in grad school, one of my cohort members had a practice survey that we were workshopping, and there was a guest lecturer with us that day. In reviewing this survey, that guest lecturer really drove home the point that asking about race in the survey wasn't necessary. The cohort member I graduated with said, well, this is a requirement of the NIH. And the guest lecturer said, okay, but why? You're looking at processes of care. You're looking at a health condition that impacts racial and ethnic groups in the same way. Why are you collecting data about race? It really highlighted to me that in research, we need to tailor our approach to the research question and not just think in cookie-cutter terms: well, this is what the NIH asks for, this is what I always see in a Table 1. Don't forget that Table 1 is the table used to describe the study population. We very routinely see race and ethnicity among the variables reported in Table 1. In Table 1 you'll see sex, or maybe they say gender; you'll see race and ethnicity, and maybe they put non-Hispanic white. Exactly what the variable names are is not consistent, and that's part of the underlying problem. Maybe you see insurance status or region. It's the table you use to report the findings of the study specific to describing the study population. This experience with the guest lecturer made me think differently about what we do because it's habit versus what we do because it's valuable for our research question. What does my Table 1 need to look like for my study, not, well, what does every other Table 1 report?
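To make the Table 1 idea concrete, here is a minimal sketch in Python. The participants, variable names, and values are all invented for illustration, not drawn from any real study; the point is that the descriptors in Table 1 are chosen deliberately for the research question rather than copied from a boilerplate list.

```python
from collections import Counter
from statistics import mean

# Hypothetical study population: toy data, invented for this sketch.
participants = [
    {"age": 71, "sex": "female", "insurance": "Medicare"},
    {"age": 68, "sex": "male",   "insurance": "Medicare"},
    {"age": 74, "sex": "female", "insurance": "Medicaid"},
    {"age": 80, "sex": "female", "insurance": "Medicare"},
]

def table_one(rows, variables):
    """Summarize only the descriptors chosen for the research question."""
    summary = {
        "n": len(rows),
        "mean_age": round(mean(r["age"] for r in rows), 1),
    }
    for var in variables:
        # Counts per category for each deliberately chosen variable.
        summary[var] = dict(Counter(r[var] for r in rows))
    return summary

# We decide which descriptors belong in Table 1 for *this* study,
# rather than reproducing every variable other papers happen to report.
print(table_one(participants, ["sex", "insurance"]))
```

If race and ethnicity genuinely inform the research question, they go in the `variables` list; if not, nothing forces them into the table.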
I want to highlight another example from a different point of view. I participated in a multi-year heart failure study as a healthy control. I wasn't involved in the research team and I wasn't an investigator; I was just a subject of the study. But I had a special eye for research, because that's what I was going into and I knew that's what I was going to do with my life. So I was really keeping an eye on the investigators and on the nuances of the study, even though I was just a subject. The study involved guided, prescribed exercise, a particular diet, and then multiple blood draws and, to my memory, some survey questions. The hardest part for me was the diet, because I've always been very plant-based and I don't like meat. However, when I was asked questions about that, I could tell that the investigator went off script out of curiosity and engaged with me in a discussion about nutrition that was totally outside the research question, in a way that made me feel really uncomfortable. I wanted to answer the questions. I wanted to be a part of the study. One, the study was really interesting. Two, it was really well paid, and I was a graduate student. I wanted to be involved. But I didn't want to discuss my longstanding views with a relative stranger, and have a health discussion with a relative stranger, especially under the premise that it was being recorded and could be used for research purposes. It probably wasn't going to be used for research purposes, because it had absolutely nothing to do with the study question. But I was being asked questions outside of the purpose of the study, and that in and of itself is a problem. Let me give you another example of data collection gone wrong per the guidance in this document. This was my own study, one that I had designed and was conducting. I was conducting focus groups in a Colorado nursing home.
I interviewed directors of nursing and held a few staff focus groups, but this particular focus group was of the individuals who lived there and were receiving care there: the patients. Before that focus group, I asked them to fill out a short survey. I was asking about age and about the family members involved in decision making for types of care and care setting. But one question I included that absolutely wasn't worth the patient burden was asking whether the participants were part of long term care or part of skilled nursing care. One thing I did right was that it was a checkbox, so it was low burden. But one thing I didn't do right was including terms like skilled nursing care and long term care, which are really the terms of researchers, clinicians, and policymakers. People who are involved in insurance definitely care whether a patient is in long term care or skilled nursing care, because it affects payment, it affects the level of care being received, and it affects the length of stay you can anticipate for that patient. But those are insider terms. They're not terms that patients use to discuss living in a nursing home. So when that was put on the survey, and the participants, before the focus group even got started, were asked to check whether they were long term care or skilled nursing care, there were a lot of questions. In order to answer those questions, I needed to say things like, well, skilled nursing care is a higher level of care. You can imagine the follow-up questions to that: well, I need the best care; why am I long term care and not receiving skilled care; is my care unskilled? It just went into really uncomfortable territory that should have been, one, discussed in private, and two, discussed within a clinical conversation, a direct patient care conversation, not within a research conversation.
So the participants not knowing the difference between long term care and skilled nursing care, and the conversation that transpired trying to answer questions about the difference, was not fun, and it set a terrible tone for the focus group I was trying to conduct. I just wanted to highlight this section of the reading and then contextualize it in a bigger picture. First, we're looking at a very, very simplified process for a database study. We first collect the data, then we analyze the data, then we draw conclusions from what we collected. Perhaps we identify a need or a gap in care; in order to address that need and fill that gap, we change policy, we change practice, and ultimately we measure health outcomes to know whether our changes improved health equity. But it's a two-way street. What we learned in terms of where the gaps are, where the needs are, who is disproportionately affected, who wasn't included in the policy change, and who was discriminated against in the practice change is the lens through which we need to look in order to restart the process and re-collect data. So we change the data collection process based on what we know. That's why it's a loop, or a two-way street. The point I'm trying to make in this visual is that once you have information on your outcomes surrounding health equity, that information needs to be used to change your data collection process. We need to start from the beginning, now through a more refined and more evidence-based lens in our data collection process. So: what didn't we need? What information do we need about people who maybe weren't included or whose voices were marginalized? Which groups were disproportionately impacted? What needs to be refined about the policy change? So it really is an iterative process. Now let's add to the complexity.
We're now looking at the knowledge to action model and thinking about how each point throughout it is impacted by, or needs to consider, data equity for appropriate execution to improve health equity. The knowledge to action model may be new to you; it's an implementation science model, and it can be used at the individual level, but it can also be used at the population or systems level. At the individual level, it describes the steps that need to happen in order to study and examine how individuals move from acquiring new knowledge to changing their own health behavior. An example would be someone who has heart failure learning from their clinician that daily weights are a non-burdensome, straightforward, non-invasive way of tracking heart failure progression and monitoring how well their heart failure therapeutics are working. When your heart is not pumping well, you retain fluid and have more swelling, and the fluid overload associated with heart failure can be deadly. One of the ways that individuals can monitor what's going on with them surrounding this issue is by taking a weight every day and looking for sudden changes and patterns. So in order to take that action, in order to make that health behavior change, the individual first needs to know and understand why this action is valuable for their own health, and that step requires knowledge inquiry, knowledge synthesis, and then perhaps producing a knowledge tool, like, let's say, a schedule, a summary, or tailored messaging. They've identified a problem, and they've sustained, let's say, knowledge use: they're using the daily weights to monitor their heart failure. Perhaps they're evaluating outcomes: I know this information, I'm applying this information, and because of that, I'm able to answer my doctor's questions more thoroughly.
Because of that, I'm able to help my clinical team tailor my therapeutics for better health outcomes. Perhaps the clinical team is monitoring knowledge use, following up with the patient to see whether the daily weights are going well and whether they're being recorded. That's at the individual level. At the systems level, the knowledge to action model is used to examine how problems are first identified using knowledge inquiry, knowledge synthesis, and knowledge tools and products. Then we work our way around the knowledge to action model to iteratively improve both the knowledge surrounding the problem and the methodologies, the knowledge tools, and the adaptation to local context, iteratively improving the situation such that we continue to see health behavior outcomes that benefit the whole population or particular populations. You can see that these steps require data, and that principles of data equity could impact how those steps are executed and whether we ultimately contribute to improvement in health equity, or instead contribute to continued stereotyping, to not addressing marginalized individuals, or to further marginalizing different groups. The step I'm going to pick on is monitoring knowledge use. Suppose we are interested in whether a population has adequately been intervened upon to improve, let's say, smoking or vaping behavior. We want to know whether our, let's say, awareness campaign surrounding the public health harms of vaping has reached a particular population. We want to monitor knowledge use. What data are we going to use to monitor knowledge use? Perhaps we're going to collect some data via survey or interview. Perhaps we're going to look at hits on a social media page to see if people are referencing a particular knowledge product or knowledge tool, and we can use that as a proxy variable for monitoring knowledge use.
Continuing to apply data equity principles at this stage and at others can help make sure that we are not propagating harm and that we're inclusive of groups that have been historically marginalized. Now that we've reviewed some highlights from the reading and looked at concepts of data equity within the context of a couple of frameworks, we're moving into a case study. I thought this one was really interesting: data disaggregation with American Indian and Alaska Native population data. This case study dives deeply into what has happened in the absence of federal guidelines and standards for collecting, surveying, and reporting data on American Indian or Alaska Native populations. This population is not represented well in surveys, and/or is grouped together with other racial or ethnic groups. That disrupts multiple steps of research and of the knowledge to action model. Thinking about those steps: if at the point of data collection you have, let's say, checkboxes for your participants to check off, but American Indian or Alaska Native is either not included or grouped in with another group, what does that do to your analysis? It means that the outcomes for that group are going to be very fuzzy in your analysis. What does that mean for your policy? We're not sure whether the policy supports this group, is pertinent for this group, or disproportionately affects this group, because we don't know what conclusions we can specifically draw for this group; at the point of data collection, they were aggregated or missed in our survey design. It disrupts these multiple steps and interferes with our goal to improve health equity. One of the solutions that this study brought forward was the use of survey weights. We can mathematically right some of these wrongs.
Rather than grouping American Indians and Alaska Natives with other racial and ethnic groups simply because the number of those individuals is small in a particular survey population, we can instead use survey weights to mathematically scale that group's contribution to its true share of the population, so that we can draw conclusions without the aggregation issue that has had negative consequences for data equity. I thought this was a really interesting article; I'll put it up as optional reading. Lastly, just to wrap up, I wanted to point out that there are a lot of future careers in data equity. DEI has been in the news a lot and there are some questions about the future, but there are going to continue to be institutions and organizations looking at good data collection, and data equity in data collection is good data collection. So I encourage you to take a look at the organizations that helped put forth the reading document for this module. You can see that they took information from people from multiple organizations. If you're interested in future work in this field, it might be worth looking at those organizations.
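To make the survey-weight idea concrete, here is a minimal sketch of design weighting. All group labels, outcomes, sample counts, and population shares are invented for illustration; they do not come from the case study. The design weight for each group is its assumed population share divided by its observed sample share, so a small group's responses count toward their true share of the population instead of being lumped into an aggregate category.

```python
# Invented survey: outcome is 1 if the respondent reports having a
# usual source of care. 2 of 10 respondents are AI/AN (20% of the
# sample), but we assume AI/AN is 5% of the target population.
responses = (
    [{"group": "AI/AN", "outcome": 1}, {"group": "AI/AN", "outcome": 0}]
    + [{"group": "white", "outcome": 1}] * 6
    + [{"group": "white", "outcome": 0}] * 2
)

population_share = {"AI/AN": 0.05, "white": 0.95}  # assumed, for the sketch
sample_share = {"AI/AN": 2 / 10, "white": 8 / 10}   # observed in the sample

# Design weight: population share / sample share.
weight = {g: population_share[g] / sample_share[g] for g in population_share}

def weighted_mean(rows, weight):
    """Population-level estimate that respects each group's true share."""
    num = sum(weight[r["group"]] * r["outcome"] for r in rows)
    den = sum(weight[r["group"]] for r in rows)
    return num / den

# The small group keeps its own estimate rather than disappearing
# into an "other" category.
aian = [r for r in responses if r["group"] == "AI/AN"]
aian_rate = sum(r["outcome"] for r in aian) / len(aian)

print(weighted_mean(responses, weight))  # roughly 0.7375 vs. unweighted 0.7
```

The group-specific estimate stays reportable on its own, and the overall estimate is corrected for the mismatch between sample composition and population composition; a real survey analysis would also adjust the variance estimates for the weighting.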

Module 3 Lecture

From Cynthia Morrow December 1st, 2024  
