That sounds good. Okay, let's start the recording. Hello, everyone. Thanks for joining this webinar. This webinar series has been organized by the IEEE Task Force on power system resilience metrics and evaluation methods. Today's topic will be presented by Dr. Svetlana Ekisheva; the topic is resilience metrics development at NERC. Just before giving the floor to Svetlana, a very quick introduction. Svetlana is a principal data science advisor at the North American Electric Reliability Corporation, or NERC. She is responsible for developing statistical models and performing statistical analysis of databases supported by NERC. Her expertise includes 15 years of teaching and conducting research at universities in Russia and the US. Svetlana earned her PhD in probability and statistics from Saint Petersburg State University in Russia in 1997. Her research focus is in the area of risk assessment, reliability, and resilience analysis of power systems. Thanks, Svetlana, for agreeing to give this talk; we are looking forward to learning from you about what you have developed for resilience metrics. The floor is yours.

Thank you. Thank you, Mohammed, for the introduction, and thank you all for joining the call. I hope you will find that some of the things I am going to discuss today are very closely related to the technical report we are developing now as a team. You will also find many familiar slides, figures, and concepts that were presented a couple of weeks ago by Ian Dobson; Ian is also a contributor to this presentation, as are Micah Walker and Christopher Claypool. Here is an outline of my talk. I know most people know about NERC, but I will still provide a very short introduction. I will talk about our mission, our annual State of Reliability report, and a very important data source that NERC supports, the Transmission Availability Data System.
The rest of my talk will be a review and analysis of large transmission outage events on the North American bulk power system, and we will talk about events caused by extreme weather. I will provide some recent statistics and some multi-year statistics about these events. We will talk about outage and restore processes and the resilience metrics that we can define and analyze using those processes. Then we will move to a more general discussion about how we can track changes in resilience, and I'll provide some conclusions and talk about next steps.

My organization, the North American Electric Reliability Corporation, is part of the Electric Reliability Organization (ERO) Enterprise, which is comprised of NERC and the six Regional Entities whose footprints you see here on the map. Together they represent the ERO jurisdiction, which covers 48 US states, the Canadian provinces, and a very small part of Mexican territory right across the border from California. That is where our data comes from, and where our reliability standards are in effect. Our mission is to assure the effective and efficient reduction of risks to the reliability and security of the grid. As part of this mission, NERC develops and publishes its annual State of Reliability report, and you see here the title page of the most recent one. As we speak, we are getting data from 2024 to start developing this year's report. The goal is to annually inform regulators, policy makers, and the industry about the most significant reliability risks and the actions taken, and to be taken, to address those risks. Here is a link to the most recent report. To evaluate the state of reliability, we define and track multiple metrics and draw conclusions about the state of reliability based on performance with respect to those metrics. We will talk about resilience metrics later.
Here I just wanted to show you Figure 1.1 from last year's report, with very high-level statistics on inventory and performance. You see here what we oversee and what our standards govern. In particular, relevant to our talk today is more than half a million miles of transmission lines. These are transmission-level lines in North America, at voltages above 100 kV, which is what is actually reported in TADS, the Transmission Availability Data System. All registered US and Canadian utilities that have bulk power system (BPS) level transmission equipment report their inventory and outage data to TADS, so we have mandatorily reported data about outages. What is, I think, of interest and relevant to this group: each outage record contains information that can be useful for resilience analysis, namely the start and end time of the outage; the company, which is the best way to identify geolocation; the outage mode (right now we use automatic outages); the element type; and the outage causes, both initiating and sustained. Here is a link to the TADS Data Reporting Instructions, with all the details about what is reported.

Now, we use TADS data, this information about outages, to identify large transmission outage events. How we do it is something that Ian mentioned in his talk two weeks ago: we apply an automatic procedure, an automatic algorithm that joins outages based on their start and end times. We say that outages are joined into a single event if they start in close succession and have overlapping times. Here is a picture; time is on the x-axis and you see a fast accumulation of outages, and they all overlap. For the outages joined into events, we can look at their outage causes, so we can see whether a particular event is weather related. For resilience analysis, we focus on so-called large events, which we define as events with 20 or more outages. I will mention it later, but maybe it's a good time to say this here.
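The event-joining rule just described can be sketched in a few lines. This is an editorial illustration, not NERC's actual algorithm: the `Outage` record and the pure interval-overlap criterion are simplifying assumptions, and only the 20-outage cutoff comes from the talk.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outage:
    start: float  # outage start time, e.g. hours from a common reference
    end: float    # outage end (restore) time

def group_into_events(outages: List[Outage], min_size: int = 20) -> List[List[Outage]]:
    """Chain-join outages whose time intervals overlap into events, then
    keep only 'large' events: those with at least min_size outages."""
    if not outages:
        return []
    ordered = sorted(outages, key=lambda o: o.start)
    events = []
    current = [ordered[0]]
    frontier = ordered[0].end          # latest restore time seen so far
    for o in ordered[1:]:
        if o.start <= frontier:        # overlaps the running event window
            current.append(o)
            frontier = max(frontier, o.end)
        else:                          # time gap: previous event is closed
            events.append(current)
            current, frontier = [o], o.end
    events.append(current)
    return [ev for ev in events if len(ev) >= min_size]
```

A real implementation would also have to decide how to treat momentary outages and near-misses in time, which this sketch ignores.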
These large events jointly contain less than 1% of all the outages in TADS; about 0.2% of outages are part of one large event or another. But these are what we call high-impact, low-probability events, which is what is relevant to resilience studies. Now, if we want to associate a large event with a particular extreme weather event, we need to rely on external sources of weather data, because TADS does not have any details about what kind of weather caused the outages. So we rely on NOAA data, Velocity Suite data, Ventusky, and all kinds of public reports and news, and we compare our records with occurrences of extreme weather events and natural disasters. In all cases where it is really a catastrophic event, we identify the natural disaster that caused the large event. So we use both the automatic procedure to find large events and then a manual review to verify and confirm those events.

Here is the latest available information about weather-related large transmission events, from 2023. The large events are listed in chronological order; there were twelve large events, and for each one you see here the day it started, its size (the size is the number of automatic outages in the event), the Interconnection affected, a column showing the type of extreme weather that caused the large event, and some statistics about the impact on the transmission system. I don't want to spend much time now on these statistics; this is a table from the State of Reliability report, and I am going to talk about these metrics later. What I wanted to mention here is the event highlighted in blue. It was a large event of size 183; many of you may remember the Canadian wildfires, and this one was caused by wildfires in Quebec.
It was 180 transmission outages, and you can see a huge MVA impact in terms of transmission capacity, because almost all the transmission elements, transmission lines, were from the highest voltage classes. This is also not a very typical event, because it was a composite event; if we have time, I will discuss it at the end of my talk. The second largest event, more of a typical event, was a winter storm and tornado event in the Eastern Interconnection with 119 automatic outages. You see here that this large event was caused by a mix of different types of extreme weather, and that is actually what we see more and more often: extreme weather with a huge footprint, with one type of extreme weather next to another or followed by another. This causes not a majority, but many, many large events in recent years. So if we want to identify a predominant extreme weather type, we need to look at all the outages and footprints. We find the most important cause and extreme weather type, and we put the event in that bucket, because we want to analyze different types of extreme weather and their differences and similarities.

Here are nine-year statistics for all weather-related large events, with a breakdown by extreme weather type. There were 101 large events. The largest group of those large outage events was caused by thunderstorm wind; the second largest, as you see, was caused by winter weather; then by hurricanes and tornadoes; and the smallest group was caused by wildfires. What about other large events, not weather related? They are very rare: over nine years we only had two in North America, again counting events with 20 outages or more. One of them, from 2017, was caused by an incorrect field modification and a RAS operation, and I have a link here to the NERC Lessons Learned. The second one is from 2023; it was a contamination event.
That event was caused by salt contamination. Both of those non-weather-related events were much shorter and much smaller than typical weather-related large events, because we don't see a lot of destruction or damage during such events, unlike the destruction and damage sometimes caused by extreme weather.

So now we have this confirmed list of large events, and we know what type of extreme weather caused them. How do we start analyzing them? To begin our resilience analysis, we define for each event three processes, and three curves that illustrate those processes. Here is an actual event; it is last year's 2023 winter storm event, with time on the x-axis. The orange curve, O(t), is the outage process: at any time during the event it shows the cumulative number of outages by time t. You see the event starts, outages accumulate and accumulate, then stop accumulating, and the curve stays flat until the end of the event, when no new outages occur. Our next process and curve is the restore curve R(t), shown in green; it shows the cumulative number of restores by time t. Of course, this curve is always below the outage curve, because we cannot have more restores than outages. When the restore curve and the outage curve meet, it means the last outage is restored and the event ends. Finally, the last process we define from the outage and restore processes is the performance process, or performance curve. It is the difference between R(t) and O(t), and it equals minus the number of transmission elements out at any given time t. So we start at zero, go into negative territory, and eventually return to zero when the event stops. This is a real-life illustration of the resilience triangle, or resilience trapezoid, concept; it is what we actually see during a large transmission outage event.
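The three curves just defined are simple counting functions of the per-outage start and restore times. A minimal sketch, assuming outage data reduced to two parallel lists of times (this is an illustration, not NERC code):

```python
def curves_at(t, starts, ends):
    """Evaluate the three event curves at time t:
    O(t): cumulative number of outages by t (the orange curve),
    R(t): cumulative number of restores by t (the green curve),
    P(t) = R(t) - O(t): the performance curve, i.e. minus the
    number of elements out at time t."""
    O = sum(1 for s in starts if s <= t)
    R = sum(1 for e in ends if e <= t)
    return O, R, R - O
```

When the last restore occurs, R(t) meets O(t) and the performance curve returns to zero, which is exactly the end-of-event condition described above.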
I want to repeat what Ian mentioned a couple of weeks ago: unlike the conceptual triangle or trapezoid, you don't see here a degradation phase followed by a recovery phase, because the outage and restore processes overlap; they go on together. Now, from this set of processes and curves, we can calculate for each event a lot of different resilience metrics, and some of them I list here. I will talk about each of those metrics, and we'll see what they look like and what analysis we perform. For now I just want to comment on how they are defined. Event size is the number of outages, or the transmission capacity if we weight outages by their MVA values; then we talk about transmission capacity impact. Outage process duration is the time over which outages accumulate, and the outage rate is how fast they accumulate. Time to first restore and the restore rate are estimated from the restore curve. Total element-days lost, or total MVA-days lost, are the total losses calculated from the performance curve: the area between the performance curve and the time axis. If it is the performance curve for transmission elements, you get element-days lost, and if it is the MVA performance curve, you get MVA-days lost. This area, of course, is equal to the area between the outage and restore curves. Then there is event duration. That is a metric that measures recovery or restoration, but we also define another restoration metric, the time to a substantial restoration level, which is the time when 95% of elements, or 95% of transmission capacity, is restored. So now let's review each of these metrics and see how they say something about resilience; actually, all of them are also functions of the extreme weather. Let's start with the event size, the number of outages or total MVA affected. When we look at the outage curve, it is right here.
It is the value the outage process attains at the end; here this event is of size, I think, 21. Of course, event size depends on the extreme weather, on its type, magnitude, and footprint, but it is also reflective of the preparedness of the transmission system, the effectiveness of hardening efforts, and the ability of the system to withstand the extreme event. Here is some information about our nine-year data set of large events: the average event size is 52 elements, and the median is 32 elements. There is a huge outlier, Hurricane Irma, the 352-outage event from 2017; here is the path of this hurricane, though these are outages only in North America. It was absolutely our largest event, with a huge impact. Overall, we see that hurricanes cause the largest events on the transmission system, with statistically significantly larger event sizes; here is a summary of the statistical tests, the Duncan grouping. For all the other types of extreme weather you see here the average sizes, and there are no statistically significant differences between those groups.

Another interesting statistic is the number of distinct elements affected by an event. It is not always equal to the event size; actually, it is rarely equal, because we very often see multiple outages of the same element during an event. The two are very strongly correlated, and the number of distinct elements is about 87% of the event size on average.

Now let's look at the outage process and two metrics defined from it: outage duration and outage rate. The outage duration is right here, the time during which outages continue to accumulate, and we found that it is essentially determined by the duration of the extreme weather. The mean duration of the outage process is 11 hours, very short compared to the event duration; the median is even smaller, below six hours.
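Several of the metrics listed a moment ago reduce to simple computations on the per-outage start and restore times. Here is an illustrative sketch covering event size, total losses (the area between the outage and restore curves), event duration, the nadir, and the 95% substantial restoration time; the function and field names are assumptions, not NERC's implementation:

```python
import math

def event_metrics(starts, ends):
    """Curve-based resilience metrics for one event, from outage start
    and restore times (in hours from a common reference)."""
    n = len(starts)
    # Total element-hours lost: area between performance curve and time
    # axis, which equals the area between the outage and restore curves,
    # i.e. the sum of the individual outage durations.
    element_hours_lost = sum(e - s for s, e in zip(starts, ends))
    # Event duration: first outage to last restore.
    duration = max(ends) - min(starts)
    # Nadir: most degraded state, the maximum number of elements
    # simultaneously out (sweep the +1 / -1 change points in time).
    changes = sorted([(s, 1) for s in starts] + [(e, -1) for e in ends],
                     key=lambda c: (c[0], c[1]))
    out = worst = 0
    for _, d in changes:
        out += d
        worst = max(worst, out)
    # Substantial restoration duration: time from the first outage until
    # 95% of the outages in the event are restored.
    k = math.ceil(0.95 * n)
    t95 = sorted(ends)[k - 1] - min(starts)
    return {"size": n, "element_hours_lost": element_hours_lost,
            "duration": duration, "nadir": worst, "t95": t95}
```

Weighting each outage by its MVA value instead of counting it as 1 would give the capacity-based versions of the same metrics.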
The outlier here is the 2023 Quebec wildfires event, because its outage process lasted 22 days, and that was the duration of those wildfires; so it is the duration of the natural disaster. What about the outage rate? This is the frequency at which outages occur, and we found that outage starts are very well approximated by a Poisson process with a constant rate, for a given event of course. Here you see an actual event, and the outage curve is almost perfectly linear. What is interesting: for different extreme weather types, the outage rates are statistically similar, and on average it is about seven outages per hour. So that is how fast outages occur.

Next, let's move to the restore process and a set of metrics maybe more closely related to resilience. The time to first restore is the time between when the outages start and when the restores start. Of course, it is a reflection of preparedness, of the ability to recover and reduce the duration of an extreme event, but it also depends on the weather type and on the conditions on the ground: how well do they allow a restoration process to start? It is remarkable how short this time is: below one hour, an average of 53 minutes to start restoration, and even if we remove momentary outages it is 55 minutes. The longest time to start restoration we observe is for tornado events. For hurricanes, even though hurricanes are the largest events, it is shorter than average, and we can understand why: hurricanes are usually forecast very well in advance, and utilities are prepared, unlike tornadoes, which are unpredictable, so it takes longer to start recovery; a tornado can also cause a lot of damage and destruction. The outlier here is Winter Storm Elliott, which started in December 2022; it took four hours to start restoration for that event.
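Returning to the outage rate for a moment: since outage starts are well approximated by a Poisson process with a constant rate, a per-event rate can be estimated directly from the arrival times. A sketch; the particular estimator (arrivals divided by the accumulation window) is an assumption, not necessarily the one used in the study:

```python
def poisson_outage_rate(start_times):
    """Estimate a constant Poisson rate (outages per hour) for one event
    as the number of arrivals divided by the span of the
    outage-accumulation window (first to last outage start)."""
    span = max(start_times) - min(start_times)
    if span <= 0:
        raise ValueError("need a positive accumulation window")
    return len(start_times) / span
```

For a typical large event, this is the slope of the nearly linear outage curve, around seven outages per hour on average according to the statistics above.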
Most people are familiar with Elliott as a generation event, but it caused a large transmission event as well. Now we are interested in statistics of the restore rate. You might already have noticed that the restore process, or restore curve, is by no means linear. You see here an example of a restore curve for an event of size 60, I think. Of course, it is a step function, because the measurements are in minutes, but we see that it starts at a fast rate and then slows down. The restore rate reflects both the preparedness of the system, its ability to recover, and its ability to reduce impact and duration, but of course it also depends on the extreme weather. What we found is that restores are, in many cases, very well approximated by a Poisson process, not with a constant rate, but with a rate λ(t) which is proportional to a lognormal PDF, with parameters μ and σ that are, of course, event specific. So we see that occurrences of restores fall off over time: the restore rate decreases. Once we have this approximation, this smooth curve that approximates the restore curve very well, we can calculate the restore rate as the gradient of the smooth curve; that is, we can take the derivative and estimate the restore rate at any time. Here is the formula: μ and σ are the parameters of the lognormal distribution, n is the size of the event, and z is the number of restores at the time when the restore process starts. This formula gives us a good approximation of the restore rate.

Here I show some statistics in a bar chart. Let's first look at the orange bars: the restore rate when half of the elements are restored. For this event, that would be 30 elements, when the restore curve reaches 30 out of 60. For that time we can calculate the restore rate, and then I average it over all events with a given extreme weather type.
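The restore-rate formula just described, the derivative of a smooth curve of the form z + (n - z) * LognormalCDF(t), can be written out directly. This is an illustrative reconstruction from the description in the talk; the exact parameterization on the slide may differ:

```python
import math

def lognormal_pdf(t, mu, sigma):
    """PDF of a lognormal distribution with log-scale parameters mu, sigma."""
    if t <= 0:
        return 0.0
    return (math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2))
            / (t * sigma * math.sqrt(2 * math.pi)))

def restore_rate(t, n, z, mu, sigma):
    """Approximate restore rate at time t (hours after restores begin):
    the derivative of the smooth restore curve z + (n - z) * CDF(t),
    where n is the event size and z is the number of restores when
    the restore process starts."""
    return (n - z) * lognormal_pdf(t, mu, sigma)

def time_of_max_restore_rate(mu, sigma):
    """The rate peaks at the mode of the lognormal PDF, i.e. at the
    inflection point of the smooth restore curve."""
    return math.exp(mu - sigma ** 2)
```

The last helper corresponds to the maximum restore rate discussed next: the inflection point of the fitted curve gives both the peak rate and the time at which it is attained.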
So you can see, for each type of extreme weather, what restore rate is observed, or estimated, when half of the outages are restored. It is already quite slow, but we can notice the fastest is for thunderstorm wind and winter weather events, even though the restore rate depends on n; these types of events have the faster restoration. We can also estimate a maximum restore rate, at the inflection point of the smooth curve, to see what the maximum rate was during the restoration; the blue bars show the average maximum restore rate for a given type of extreme weather. And we can calculate the time to attain this maximum restore rate, which I think is an interesting statistic too. It takes longest for hurricane events, followed by winter weather, but it is still below six hours on average; after that, the restoration rate only declines, for all large events.

Next, the most degraded state; let's now move to the performance curve. The nadir of the performance curve shows the most degraded state during the event: the maximum number of elements out, or the maximum MVA, the maximum transmission capacity, out. Again, it depends on the outage and restore rates, so it is a function of both the weather and the resilience of the system. I show here averages over nine years, over all large events: about 32 elements. Interestingly, this statistic is very strongly correlated with the event size, and event size is a good predictor of the nadir; the nadir equals, on average, about 56% of the event size. Sometimes, we observe, the element nadir and the MVA nadir might not coincide in time, but in most cases they do. What is interesting, and I think important information about resilience: overall, the system does not stay in this nadir state, the most degraded state, for a long time.
The average time at nadir is 28 minutes. Then the next restore occurs, and the system continues to recover. Here I show again the results of the Duncan grouping for the most degraded states by extreme weather type.

Total losses: the area between the performance curve and the time axis. Of course, it represents the overall impact to the transmission system, a very important metric; it depends on both event size and duration, as well as on the outage and restore rates. I show here some multi-year statistics. What is interesting: I already mentioned that large events contain only 0.2% of TADS outages over nine years, but their total accumulated losses comprise more than 7% of overall TADS losses. So this is evidence that during large events the system is under stress; in fact, the average duration of outages during large events is significantly higher than for all other outages.

Finally, we switch to some statistics that describe recovery, or restoration duration. The more straightforward one is event duration, the time between the first outage and the last restore. This is a highly variable metric; it closely follows a lognormal distribution and has a very heavy tail. Look at the standard deviation: the mean value over all nine years of events is over 400 hours, and the standard deviation is more than two times that. So we think it is not a good metric to measure recovery, not only because of its high statistical variability, but also because it is a poor reflection of the system's recovery. These graphs show a typical event: for a long time, one or two outages remain unrestored, and over a condensed stretch of one week, nothing happens. What we see is that the longest remaining outage is not critical to reliability; one or two outages may continue long after all customer load was restored.
So it is not a representation of the actual recovery of the system. What we prefer to use instead is the so-called substantial restoration duration: the time by which 95% of the outages in the event are restored. For this event, that time is shown here; it is the time when 113 outages are restored, for an event of size 119. On average this time is just above half of the total event duration: on average it takes 53% of the event duration to restore 95% of the elements affected by the event. Again, this metric is also variable, also lognormally distributed, but the standard deviation is much smaller. So we think it is a preferable metric to account for recovery, not just because of the smaller statistical variability, but because we see that the 95% restoration level is often representative of the time when a utility can actually re-energize the distribution system and start serving customers.

So now, just a little bit of general discussion. I spent a lot of my time talking about these metrics. What we can do now: we can identify large events, we can calculate a lot of metrics for them, and we can track changes in those metrics from one year to another, or we can establish trends. But how do we make the jump from changes in metrics to drawing reliable conclusions about changes in resilience? When can we say resilience is improving? It is a very difficult task to accomplish, because when you look at those large events, no two events, I mean natural disasters, are the same. Even if all the metrics we calculate improved in the next year, we need to look at the actual weather that caused those events to draw conclusions about improvement in resilience. We did, therefore, one study looking at very localized and very detailed weather data and outage data in Florida.
We were looking at the paths of all hurricanes over several years; we looked at multiple weather data, category, path, duration, wind gusts, precipitation, everything that was available to us, and at the outages and restores, to conclude that resilience against hurricanes and tropical storms in Florida actually improved. So here is an open question to this group; please provide your thoughts and feedback by email. Do you think that we can actually reach conclusions about improvement or decline in resilience, or stable resilience, just by looking at, say, time to complete restoration, or restore rate changes, or the other metrics we just reviewed?

And final thoughts on what we talked about today. NERC can identify large transmission outage events on the bulk power system in North America, and we can calculate a set of resilience metrics for them. We found that all large events except two were caused by extreme weather. We found that there are statistically significant differences in resilience metrics between different extreme weather types; for example, we found that hurricanes cause the largest and longest events. All of those metrics can measure different abilities of a resilient power system, or combinations of those abilities, and absolutely all metrics also depend on the extreme weather. So again, my question, my ask to the group: out of the maybe dozen metrics we reviewed, if we needed to choose two or three metrics that are the most important, informative, and relevant to measure resilience, which would they be? Please respond with your thoughts and ideas. Our conclusion is that many different types of data are needed to draw reliable inference about resilience improvement or decline: not just outage data, but also weather data and information about utilities' activities in terms of preparedness, hardening efforts, and so forth. And these are some references.
I don't think I have time to talk about the Quebec wildfires, Mohammed, so maybe I'll end here. You will see the description and graphs in the slide deck that Mohammed will share with you. This concludes my presentation. Thank you very much.

Thank you, Svetlana. This was an interesting and very informative presentation. Just a quick announcement: if you have not yet recorded your attendance, please do so; there's a link in the chat. I think that link will ask you to provide your name and email address, just to include that in the attendance sheet. We have one or two questions for now. The first question, I think from Samrat: which components are mostly affected by extreme weather events in the transmission infrastructure?

Transmission lines. If we just look at the transmission inventory that is reportable to TADS, AC circuits are the majority of elements reported, and they are mostly affected by large events as well. But I should mention the element types we consider, that we collect information about in TADS: AC circuits, DC circuits, transformers, and back-to-back converters. These are the four types of transmission equipment we collect information about.

Thank you. I have a follow-up question; this is Samrat again. Are the effects across those components similar across the weather events, or are they different for different events? For example, the mostly affected elements being transmission lines pulled down due to icing, that type of thing.

Let me think about that. For example, here, all of these outages were AC circuit and DC circuit outages. No equipment was actually damaged, and the outages were very short; it's just that the wildfires lasted for a long time. No transformers were affected, because it was not actually a matter of damaged equipment; it was mostly contamination that caused the majority of those outages.
There was not a lot of damage and destruction. There are some transformer outages for hurricane events, but very rarely for wind events, unless the wind is associated with a hurricane, which can cause real damage or destruction to transformer equipment. So yes, there are some differences in the composition of outages for different extreme weather types.

Yeah, thank you. Thanks, Samrat, and thanks, Svetlana. There's another question from Craig: how are these metrics planned to be used for evaluating investment in improving resilience? And Craig, if you want to elaborate on this one, please feel free to do so.

Yes. I tried to navigate a discussion from metrics to the state of resilience. We can calculate a lot and say, okay, from year one to year two we saw a faster, shorter time to first restore, faster restore rates, smaller losses and higher nadirs, and a faster time to reach substantial restoration, or even total, full restoration. But again: what were the natural disasters from year one to year two? Only if we see that in the first year it was a hurricane of category one, and in the second year a hurricane of category four with a much larger footprint, wind gusts, and storm surge levels, yet all the resilience metrics improved, can we say, okay, the resilience improved as well; otherwise, no two events are the same. So it seems that a zoomed-in analysis of the characteristics of those events, the natural disasters, is necessary to factor into changes in resilience metrics, to draw conclusions about resilience itself.

Thanks, Svetlana. I think you had a question at the end; we will go to that one. I think that would be an interesting point for discussion.
But before going there, since I don't see any other questions, I have a few questions here. In the presentation you nicely described the difference between 95% of the number of restored components and 95% of the capacity being restored. Did you see any difference in the time of restoration using either the capacity or the number? Do you see any difference between those times of restoration, or are they similar?

Yeah, it's a very interesting question, because of course when we started calculating this metric we wanted to see the difference, if any. At first we thought, okay, maybe companies, utilities, will try to restore higher-capacity elements first. But when we looked, we saw all kinds of correspondence: sometimes 95% of elements restore faster than 95% of capacity. It actually depends, I think, on where the damage is and what the destruction is; sometimes you cannot go and restore the higher-capacity or more critical piece of equipment just because of, for example, flooding in the area, or destruction, or damage. But in many cases we do see that 95% of transmission capacity is restored faster, and it would be a good idea to look at each event in particular; we have not done that so far.

Thank you very much. Also, there's an interesting number: 50 lines in one of the outliers being restored in one hour. My assumption here is that both re-energizing and fixing could be counted as restoration. I mean, do the data distinguish between a piece of equipment, a line or a transformer, being re-energized and being actually fixed? Because some components get damaged, and some are just re-energized. Do the data you have include both, or distinguish between them?

I am afraid to give a wrong answer.
I know that TADS gives a precise definition of what the end time of an outage is. I will check and share that definition; I just don't want to speculate and mislead you. Sorry about that. Okay, no problem. Thank you. I just wanted to know, because it is a large number: if we are restoring 50 lines in one hour, that is very, very effective. Yes. Manuel, I think you also wanted to elaborate on this one, because you have a comment here. Yes, of course. In my view, the point is that by using simple statistics to compare one year with another, it is not possible to evaluate an increase in resilience. The problem is that resilience is strictly related to high-impact, low-probability events, and that means we have only a very small set of extreme events from which to construct statistics of the different situations. To evaluate the situation, we need to consider at least 10 or 20 years of events, and we need to consider the correlation between the weather events and the fault events on the network; I think this is very important. Then the only possibility, in my view at least, is to evaluate the long-term evolution of the resilience of the network. It is not possible to use a single event, or the events of one year, to evaluate the increase in resilience from past to current and future years. And, very importantly, I think we need to treat the extreme events using extreme value theory — the generalized extreme value (GEV) distribution, for example; there are many techniques for estimating the GEV distribution. The only way to properly evaluate the situation is to use these techniques, and they require a long-term set of events.
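The extreme-value approach suggested here usually starts from block maxima: take the worst event in each year and fit a GEV distribution to that annual-maximum series. A minimal sketch of the block-maxima step with hypothetical event data (the fit itself could then be done with, for example, `scipy.stats.genextreme.fit`):

```python
from collections import defaultdict

# Hypothetical outage events: (year, MW lost at the event's nadir)
events = [
    (2015, 800), (2015, 2400), (2016, 500),
    (2017, 3100), (2017, 900), (2018, 1200),
]

# Block maxima: keep only the worst event in each yearly block; a GEV
# distribution would be fitted to this annual-maximum series.
worst = defaultdict(float)
for year, mw in events:
    worst[year] = max(worst[year], mw)
annual_max = [worst[y] for y in sorted(worst)]
```

With only a handful of annual maxima, the fitted tail is very uncertain, which is exactly the argument for a 10- to 20-year observation window.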
And this is very important. Then, of course, in the short term we need to assess the situation, because we have a current event and we need to evaluate how to increase the resilience for that specific event. To do that, we need to forecast the possible outcomes and then evaluate how to address the problem. But this is a different matter from evaluating the increase in resilience. In some sense, we need to consider the long term to assess resilience using statistical or analytical techniques, and we need to use analytical techniques in the short and medium term to address possible extreme critical events, to increase resilience in the short term, and to better manage the situation. In Italy, we consulted with the regulation authority; the regulation authority in Italy uses long-term indicators to evaluate the situation, in particular standard reliability indicators such as expected energy not served (EENS), loss of load expectation (LOLE), and so on. This allows evaluating the increase in resilience year by year, but of course to estimate, for instance, the expected energy not served, we need at least 10 years of events, precisely because we are considering extreme, low-probability events, and therefore we need a sufficiently large set of events to evaluate the resilience of the network. I think this is very important: we need to distinguish the assessment of resilience by statistical techniques, which requires a long-term set of weather and fault events, from the application of analytical techniques to forecast possible situations and to identify methods and mechanisms for increasing resilience against the specific events that can occur. Thank you. Thanks, Manuel. This is interesting.
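The EENS indicator mentioned here is, in its simplest form, unserved energy averaged over the observation window, which is why a decade or more of events is needed for the estimate to stabilize. A minimal sketch with hypothetical ten-year data:

```python
# Hypothetical event set: (year, MWh of energy not served per event)
events = [
    (2015, 1200), (2016, 300), (2017, 0), (2018, 5400), (2019, 800),
    (2020, 150), (2021, 9800), (2022, 400), (2023, 2100), (2024, 700),
]

# EENS: total unserved energy divided by the number of observed years.
# Rare, large events (like 2021 here) dominate the estimate, so short
# observation windows give unstable values.
n_years = len({y for y, _ in events})
eens = sum(mwh for _, mwh in events) / n_years
```

Dropping the single 2021 event from this toy data would cut the estimate by nearly half, illustrating why one or two years of data cannot support a year-over-year resilience verdict.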
I think maybe it's a good idea to have another talk that you could give, if you are available. As we discussed before, that is probably a good point to start from, right? Yes, thank you. Thank you, Manuel, I appreciate your feedback. Of course, I can discuss this idea in a future talk. Thank you so much. And I think, yes, Ian provided a link related to Craig's question about evaluating investments in resilience. I think all of you can see this link; we will also share it after this meeting for those who want it. Okay, give me just a second; wait until I resend you the latest version, I made some last-minute edits. Thank you. Yeah, so you can do that. And there was another question: Svetlana, I think you raised a question toward the end about which metrics we should use. This is one of the hard questions for this task force, and I wish we had more time to discuss it. But again, the floor is also open: for those who have participated, or who may watch this video later, if you have any suggestions, please reach out to Svetlana or to me, or to anyone here, to discuss what could be the best collection of metrics. Yeah, thank you. Thanks, Svetlana. Thanks, everyone, for joining this talk. It was very interesting and informative. Thank you so much. Bye-bye. Thanks.

Dr. Svetlana Ekisheva - Resilience Metrics Development at NERC

From Mohammed Ben Idris March 5th, 2025  
