TOM ADAMSON: Hello everyone, and welcome to another episode of Eyes on Earth, a podcast produced at the USGS EROS Center. Our podcast focuses on our ever changing planet and on the people here at EROS and across the globe who use remote sensing to monitor and study the health of Earth. My name is Tom Adamson. NLCD, the National Land Cover Database, is a land cover product produced at EROS, and it's widely used for land cover and change research in the U.S. EROS is also responsible for making sure it's as accurate as it can be. In her work on the reference and validation team for Annual NLCD at EROS, land remote sensing scientist Jo Horton samples thousands of 30-meter Landsat pixels to identify the land cover class those pixels should be. Why do this? To provide a reference dataset for NLCD users so they know about the accuracy of each of the NLCD land cover classes. In this episode, we'll learn a lot more about how that works and even take a close look at a sample pixel to see how the team validates these pixels. Jo, will you go ahead and introduce yourself? JO HORTON: Certainly. My name is Jo Horton. I am a land remote sensing scientist at the USGS EROS Center, working as a KBR contractor, and I am currently the technical lead for the reference and validation team for the Annual NLCD product. ADAMSON: You mentioned Annual NLCD. That's a new release that just came out October '24. But first of all, tell us really quick about NLCD. What is the National Land Cover Database? HORTON: That's actually a land cover and land cover change product that the USGS has been putting out for quite some time. The original release would have been a map from 2001. Prior to Annual NLCD, there were nine, what they call, epochs. So time-phased releases. So 2001, 2006, just snapshots in time that maps land cover across, we'll be talking about CONUS today. So the lower 48. ADAMSON: The conterminous United States. HORTON: Yes. ADAMSON: Or connected 48 states. 
There's lots of different ways to say it. HORTON: Yeah. The lower 48 or the conterminous U.S., or the I'll call it CONUS for simplicity. ADAMSON: CONUS is fine. HORTON: It basically maps the land cover at what's called a level two level of detail, which means instead of mapping just forest per se, it divides it into subclasses of deciduous forest, evergreen forest, and mixed forest. The Annual NLCD, which is the product we just released, is a continuation of that project, taking it to the next level. Basically, instead of mapping at the epochs like the legacy NLCD does, we are mapping those land cover and land cover change in those products annually. So every year we'll have a product from 1985 through the current release, which is 2023. And then in the future we'll release 2024, 2025. So it'll be land cover across CONUS every year, which is like the next step in NLCD. ADAMSON: What is your role in NLCD? HORTON: I work on the what's called the reference and validation, or the R and V, section of the project. Basically, we are collecting the data that will be used to test and check the accuracy of the Annual NLCD land cover and land cover change products. In a high level nutshell, we are collecting data, if you picture the world as 30 meters by 30 meters, a Landsat pixel, about the size of a baseball diamond. We are collecting the land cover and land change information at, oh, up to 10,000 of those locations across CONUS for every year. So basically my team, we get a plot, we look at it, we look at the Landsat data, we look at high-resolution imagery, and we go through and we say every year from 1984 through 2023, what was on that plot? What was the land cover? And then if there was a change, when was it and what kind was it? And then those labels that we're assigning for every year are going to be compared to those products at that same location, same year, to give a metric for how accurate the products themselves are. ADAMSON: Okay. Sounds good. 
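The comparison Horton describes — interpreter labels checked against the map's label at the same location and year to produce an accuracy metric — is at heart a confusion-matrix calculation. A minimal sketch in Python; the class names and label lists below are hypothetical illustrations, not NLCD data, and the function is a toy version of the idea rather than the project's actual tooling:

```python
from collections import Counter

def accuracy_metrics(reference, mapped):
    """Compare reference (interpreter) labels against map labels for the
    same plot-years. Returns overall agreement plus, per mapped class,
    the fraction of pixels the reference agrees with (user's accuracy)."""
    pairs = list(zip(reference, mapped))
    overall = sum(r == m for r, m in pairs) / len(pairs)
    mapped_counts = Counter(m for _, m in pairs)      # pixels mapped as each class
    correct = Counter(m for r, m in pairs if r == m)  # of those, how many agree
    per_class = {c: correct[c] / n for c, n in mapped_counts.items()}
    return overall, per_class

# Hypothetical labels at four plot-years:
ref = ["evergreen forest", "evergreen forest", "open water", "developed"]
prod = ["evergreen forest", "open water", "open water", "developed"]
overall, per_class = accuracy_metrics(ref, prod)
print(overall)                   # 0.75
print(per_class["open water"])   # 0.5 — one of two water calls matched
```

Users reading the published metrics see exactly this kind of per-class number, which is why rare classes need enough samples to make the per-class fractions meaningful.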
How many plots are you looking at to validate? HORTON: It's going to be up to 10,000. We've already collected a little over 5,000. It's a two-phase collection. So phase one was the first 5,000. Now we're on to the second group, which could be up to 5,000. The final number is still being determined. ADAMSON: Are these pixels that you're looking at completely random? Is there a reason that any of them are chosen? HORTON: The first 5,000, that's why it's phase one. ADAMSON: Okay. HORTON: Phase one was completely random, so it was 5,000 pixels chosen across CONUS just completely random. We just basically bounded it to what are we mapping. So, you know, we don't go way out into the ocean, for example. And then just pick 5,000. Phase two is what's called a stratified random sample, which means once we had the products, the products that were released in October, we could use that to create bounding boxes for various situations, various classes, and stratify the second set of samples. For example, to make sure we have enough locations in rare classes. Because as you can imagine, with a random sample rare things on the landscape are going to be rare in the random sample, and you need a certain number of samples in order to even be able to do any sort of robust statistical calculation. If you only have 12 out of 5,000 in a class, it's not going to be statistically rigorous. ADAMSON: Not good enough. HORTON: Right. ADAMSON: Not rigorous enough. What is one of those rare classes? HORTON: One of the rare ones that we ran into, and people kind of jump when I say this because you don't think it's rare until you-- Keep in mind we're talking the entire lower 48. The entire surface of the lower 48. High intensity developed. ADAMSON: I am surprised. HORTON: So those areas that are basically 100% cement or building. ADAMSON: Yeah, yeah. Middle of a city, downtown. HORTON: Right. ADAMSON: Parking ramps. HORTON: Right. Roof of a shopping mall. ADAMSON: Yeah. 
HORTON: You know, those kind of things. ADAMSON: That's actually rare 48-state-wide. HORTON: Exactly. They're what jumps out when people think of development, because we think of Atlanta, we think of New York City, we think of L.A. And they are-- relative to other cities they are huge. But relative to the Rocky Mountains as a whole, you know, or the Great Plains as a whole, they're a minuscule part. ADAMSON: It's a small percentage. HORTON: Exactly. So when I gave you the example of 18, that was, in the random sample, that was approximately how many high intensity developeds we got out of those 5,000. So that's a good example. ADAMSON: So the next phase in validation will be looking at a few more of those. HORTON: Right. ADAMSON: For example. HORTON: Right. Working on-- And without going into all of the statistics involved in how you determine strata in a stratified random sample, there are lots of papers out there or statistical textbooks that can go into all of that, but it's not fun radio. Basically, we use a selection method that allows us to somewhat target those classes that we know we didn't get enough of, or at least areas we think have a higher probability of having those classes. We're trying to get to, you know, 200, 225, something like that, samples per class to allow for some statistical rigor. ADAMSON: All right. That's not something that I thought of that you would have to think about. HORTON: Those are the fun kind of things that I get to deal with and think about: how many samples do we need and how do we get them? ADAMSON: How many people are doing this work? How many people are on the team? HORTON: Well, obviously the Annual NLCD team is larger. If you're talking the R and V team and you're talking the people who are doing the, you know, boots on the ground, nuts and bolts interpretation type work, the data collection, we have five interpreters that are doing the, you know, those up to 10,000 plots, looking at them every year.
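The phase-two selection Horton outlines — drawing samples by stratum so a rare class like high intensity developed reaches a usable count — can be sketched as a stratified random sample. This is a rough illustration under stated assumptions: the landscape, class names, and per-class target are invented, and real strata design involves the statistical considerations she alludes to:

```python
import random
from collections import defaultdict

def stratified_sample(candidates, per_class, seed=42):
    """candidates: iterable of (pixel_id, mapped_class). Group pixels
    into strata by mapped class, then draw up to per_class locations
    from each stratum, so rare classes are represented even though a
    purely random draw would return mostly common ones."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for pixel_id, cls in candidates:
        strata[cls].append(pixel_id)
    chosen = {}
    for cls, members in strata.items():
        k = min(per_class, len(members))  # take all if the class is very rare
        chosen[cls] = rng.sample(members, k)
    return chosen

# Hypothetical landscape: grassland is common, high-intensity developed rare.
pixels = [(i, "grassland") for i in range(5000)] + \
         [(i + 5000, "high intensity developed") for i in range(30)]
sample = stratified_sample(pixels, per_class=200)
print(len(sample["grassland"]))                  # 200
print(len(sample["high intensity developed"]))   # 30 — all that exist
```

The contrast with phase one is the point: a simple random draw of 5,000 from this toy landscape would yield about 30 developed pixels at best, while the stratified draw caps the common class and keeps every rare one.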
There's also another-- there's also a group of people who are what I call my QA/QC, my quality checkers, who review some subset of plots that get sent to them for review for a variety of reasons. And then I've got people who are helping with, you know, data creation and management and all of that. So there's about eight of us, when it's all said and done, if you take it down to those nuts and bolts people. ADAMSON: Okay. What is it like when you're examining one of these pixels? HORTON: That's when I get to put on what I call my Sherlock Holmes from space hat. Our primary data collection tool-- our primary evidence is the Landsat data. And we use a tool called TimeSync to visualize all of that data. And it basically lets us see the Landsat imagery for that 1984 through 2023. Basically, every image, assuming it's not completely cloud covered, is available to us to view that. And we can use a variety of different band combinations. So, like, most people think of photographs and they're, you know, the true color. That's what they're familiar with. We actually can harness the additional information that Landsat has from, like, the near-infrared and the shortwave infrared and those kind of things, which are better at visualizing some particular aspects of vegetation or fire regimes, things like that. So we have the Landsat data. We also have what we call the trajectory, which is taking all of those Landsat images and mapping a value to our plot location for each of them. So you can see how a plot changes throughout a growing season, throughout a year. So you kind of get the feel for what does that plot do normally. You know how in the winter, deciduous trees are just bare. And then you can watch them and they start to green up and they start to, and then boom, they're green and they're green, and they're green, and then in fall, they start to change their colors and the leaves get red, and then they fall off. ADAMSON: And, of course, we can see that... HORTON: Right. 
ADAMSON: ...in the Landsat imagery that happens... HORTON: Exactly. ADAMSON: ...throughout the year. HORTON: Yeah. And that's what the trajectory values let me see. I get a feel for that. So I-- ADAMSON: And you're moving your hand up and down as you talk, which is kind of fun because we can't see that. But it's those changes you're indicating in the reflectance for that pixel. HORTON: Exactly. Yeah, I can see okay, green up and down and green up and down. I'll look at the plot. I'll look at the trajectory. And the first thing I look for is does that wave change somewhere? ADAMSON: Yeah, it might have a natural rhythm like trees, but you're looking for a difference. HORTON: I'm looking-- Yeah, that's one of the first things I look for. Does anything jump out at me as there's, you know, that the rhythm broke. Then what I can do is, using the Landsat imagery and using high-resolution imagery. Think Google Earth. That's something most people are familiar with. Those kind of images where I can look back and start to narrow down what's on the landscape, and if something changed, what happened and when. And then we record that for every year. So in my case, I usually have two, sometimes three computer monitors with a variety of information up that I and all the other interpreters, we-- Our specialty is basically synthesizing all of that information down so we can put nature in a box, put a label. In 1996, this plot was evergreen forest, and move on from there and do that for every plot every year. ADAMSON: Well, what's interesting here is you're not just looking at one 30-meter pixel, you're looking at that 30-meter pixel back through time. HORTON: Yes. Looking all the way back to 1984 and then looking up to 2023. ADAMSON: And you're also not just looking at an image. You're looking at the scientific measurements that Landsat gave you. HORTON: Yep. All sorts of different, they call them indices.
You know, you can look at different calculations of those different bands we were talking about that can tell you different things about what was happening on the landscape, and then putting that all together and saying for this plot for this 40 years, basically, here's the story. ADAMSON: Yeah. HORTON: Here's what happened. And I'm biased, but I think it's awesome. ADAMSON: It is really cool. I like the idea that every pixel has a story. HORTON: Yes. ADAMSON: That's a really interesting way to look at it. So you mentioned this TimeSync tool. Are we able to kind of walk through and get a glimpse of what you do in that tool? HORTON: Sure. If you want. I mean, I'll try to describe-- ADAMSON: We'll have to describe what you're doing, but we'll give it a try. HORTON: All right, so like here is an example of one of the plots, or what we would see when we look at a plot. ADAMSON: Let's describe what you're seeing across the top there. There's a graph with-- HORTON: A lot of dots. ADAMSON: A lot of dots. HORTON: A lot of dots, and you can see on the bottom it's got the years. So '85, '86, '87. ADAMSON: And across the bottom looks like, well it looks a little pixelated and blurry because you're really really zoomed in. HORTON: Yeah. ADAMSON: But these are the Landsat images. HORTON: These are the Landsat images. And yeah, they look blurry both because I'm zoomed in and also because, as a general rule as human beings, we're used to what we see in a photograph. Keep in mind each one of those pixels is the size of a baseball diamond. It's going to blur, compared to what you're used to looking at. So yeah, so we've got the trajectory, that's all these dots. And that's where, when you were talking before, where I had the wave going, you can start to see the peaks in the growing season. So looking at this plot, you can see, you know, 1984 through somewhere around 1997, it's all pretty-- The values are all pretty similar. 
ADAMSON: Yeah, it's not like the same thing throughout those years. There's some variation, but it looks like, kind of rhythmic. HORTON: Yep. ADAMSON: There's a pattern... HORTON: Yeah. ADAMSON: ...over those years. HORTON: Vegetation has a pulse, you know, because even evergreen vegetation calms down a little in the winter. I mean, it's still photosynthetic, but it's, you know, it's got a shorter day to deal with. You're not going to have the exact same value every single day. But yeah, you can kind of see where it's a range that's pretty consistent and a pulse that, you know, a wave that's pretty consistent up to about 1997. And then you can see here, suddenly-- ADAMSON: There's a big change. HORTON: Yeah, suddenly the values drop pretty dramatically. TimeSync lets us, like I said, we can look at different band combinations. It also lets us look at a variety of different calculations. Different indices. ADAMSON: Okay. HORTON: So we can look at how, you know, wetness indices, we can look at burn ratios. We can look at all sorts of different things. Each one has its use and things that it's better at. What I'm showing here is one that's really good for showing what happened here, which is forest harvest. ADAMSON: Oh, okay. Logging is going on here. HORTON: Yes. ADAMSON: Okay. HORTON: So yeah, it had the-- the values were all pretty similar, and then this dramatic drop, which if we looked at the Landsat imagery, in the trajectory, you can see it's happened somewhere around 1998, 1999. Part of what we have to do as interpreters is nail down the exactly when, and if we looked in-- if we looked at all of the chips for 1998, which you can see I'm showing them here, this window here is showing all of the Landsat images from 1998. And you can see there's what, about 21 of them, looks like. ADAMSON: Yeah. HORTON: And we can use that to say, okay, here. That plot was pretty-- still pretty green tone-wise. 
Here's where it starts to turn brown, which is an indicator in this band combination and this index of less vegetation. So we know that the change occurred in mid to late 1998, which is one of the things we as interpreters have to make sure we verify. What year did that change occur, because we're trying to map change every year. ADAMSON: And you know that this wasn't a natural change because of this graph that you were looking at first. HORTON: Well, there's a couple of bits of information that go into it. You can see a graph similar to this for a forest fire too. So the trajectory lets me know-- seeing that drop in the values lets me know a change occurred. Figuring out what the change was is a synthesis of the available evidence. ADAMSON: Okay, good. HORTON: So-- ADAMSON: That's why you're Sherlock Holmes. HORTON: Exactly. So what I would look at in this kind of situation, the things that point to it being a harvest, even just looking at Landsat, we haven't even gotten into the high-resolution imagery yet. But just looking at the Landsat imagery is the fact that it's a very geometrically shaped disturbance. We make the call based on what happens at that pixel, but we do use the context around it to help us figure out what the what is. So it's very rectangular, and things like fire don't tend to be geometric. I say "tend to" because of course there are things like prescribed burns which get bounded. The fact that it's geometric makes me think human caused versus a natural thing, like a flood or a landslide or a fire. Then, what I can do is look at the high-resolution imagery, which I actually have a synthesis of. ADAMSON: So like at this point, you're not ready to say this is definitely logging. HORTON: At this point, I would be confident to say that this disturbance occurred in 1998, and then it was something human caused. ADAMSON: Okay. HORTON: But for this, for our recording, we do want to, to the best of our ability, say what the disturbance type was.
Was it a harvest, was it a fire, was it whatever. So if I look at the high-resolution imagery, like we talked about, what people would see in like Google Earth, that is what then helps me further narrow it down. In looking at the high-resolution imagery from 1993, I can see it was a forest. I already knew that because I recognize what a forest looks like in the Landsat imagery. But to double confirm. And then looking at from 1999, which is after that event we were just talking about, in the high-resolution imagery, you can see it was completely clear cut. There is no tree left. If it had been a fire, the signal in the trajectory would have looked slightly different. But also normally you would see debris. ADAMSON: Oh, I see, you might see some downed trees. HORTON: Downed trees or still standing dead ones or, you know, not always because sometimes they go in and they salvage log or whatever. But with what I was seeing in the Landsat imagery, what I was seeing in the trajectory values and what I'm seeing in the high-resolution imagery, now, I would be comfortable saying in 1998, this plot was a clear cut. And then if we go back, then the next thing is, okay, well, that's great. Now I know what it was from 1997-- To 1997, I know it was forest. I know it was harvested and clear cut in 1998. But now I gotta call the rest of it, too. ADAMSON: Yeah, you got to bring it up to date now. HORTON: Right. So going back to the, you know, if I was flipping through the high-res, high-resolution imagery, I know it goes back to trees eventually. And I actually also kind of cheat looking at this image. You can see they're in lines. So I also know it was planted. ADAMSON: It's kind of like a tree farm or something going on here. HORTON: Yes. Or at least managed-- some sort of managed forestry. And if you look, going back to our lovely trajectory. 
So we had that drop in 1998 and then you see how it's kind of got this slope going up and how it's kind of changed in color a little, goes from really bright red to kind of, I don't know, pea soup green. And then eventually it gets back up and it's a fairly similar, slightly different, but similar value range to what we were seeing before the harvest. And you've got that clear wave of, of annual pulse again. Basically, this plot was clearcut and took a couple of years for, you know, anything other than weeds and grass and, you know, that kind of stuff to grow in, some small woody vegetation starts growing in. We don't consider it a tree until it's more than 5 meters tall. So there would have been a period of what we would call, you know, low, woody or shrub until that woody vegetation gets to that minimum height, in which case it returns to trees. And this plot has been evergreen forest ever since. ADAMSON: Where did this TimeSync tool come from? Who developed it? HORTON: TimeSync was developed quite some time ago. It was originally the Forest Service and Oregon State University in around 2010. They originally developed it. The Forest Service was using it for one of their projects. LCMAP, which was a precursor, the project I worked on before Annual NLCD, was-- we collaborated with the Forest Service to collect reference data. So that's how we were introduced to the TimeSync tool. When Annual NLCD came along, we wanted to continue using that tool, so we had to bring a version in-house to customize it because the version that was originally used by-- developed by the Forest Service, you remember back when I talked about level two and deciduous, evergreen, and mixed? ADAMSON: Okay. HORTON: It did not have level two level of detail. It just had forest. ADAMSON: Forest is forest. HORTON: Right. So we had to make sure it could collect the level of data we needed to go to this level two classification. 
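The trajectory reasoning in this walkthrough — a stable seasonal wave, then a sharp drop in an index value at the harvest year — can be mimicked with a simple calculation. This is a sketch under assumptions: the annual values are made up rather than real Landsat reflectance, and NBR (the Normalized Burn Ratio, built from the near-infrared and shortwave-infrared bands) stands in for whichever band combinations and indices the interpreters actually view in TimeSync:

```python
def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR and SWIR2 reflectance.
    Dense, healthy forest scores high; bare or burned ground scores low."""
    return (nir - swir2) / (nir + swir2)

def largest_drop_year(annual_index):
    """annual_index: {year: summary index value}. Return the year whose
    value fell the most from the prior year -- a candidate disturbance
    year for an interpreter to verify against imagery."""
    years = sorted(annual_index)
    drops = {later: annual_index[earlier] - annual_index[later]
             for earlier, later in zip(years, years[1:])}
    return max(drops, key=drops.get)

# Hypothetical trajectory: stable forest, clearcut in 1998, slow regrowth.
traj = {1995: 0.71, 1996: 0.69, 1997: 0.70, 1998: 0.18,
        1999: 0.25, 2000: 0.34, 2001: 0.45}
print(largest_drop_year(traj))    # 1998
print(round(nbr(0.45, 0.09), 2))  # 0.67 — a forest-like NBR value
```

Note that this only flags a candidate year; as Horton describes, deciding whether the drop was a harvest, a fire, or something else still takes the interpreter's synthesis of imagery and context.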
ADAMSON: Clearly, you can't do this to all of the billions of pixels that cover CONUS. HORTON: I could, but it would take a long, long, long, long time. ADAMSON: Just thinking that there must be some automation, and what you're doing, does what you're doing inform the automation that happens to identify all those billions of pixels as their land cover types? HORTON: There is a lot of machine learning and automations and algorithms involved in the creation of the Annual NLCD products, the classification, putting a label on each of those billions of pixels like you talked about. What I am doing right now with the reference and validation, there is some automation in things like how we, you know, helping us select those 5,000 random ones from all those billions of pixels. But what I do right now, a lot of it is actually manual interpretation. Part of having the ability to have automations or, you know, AI, machine learning, is that you need a lot of high quality, robust data to train the AI on. ADAMSON: Okay. HORTON: And there's not a lot of datasets like the one we're creating here, the reference and validation. So there's not a lot of data to train an AI on yet. Part of the appeal and the interest that I constantly am hearing from, like, when I went to AGU and presented, there is a lot of interest in the reference and validation data itself, because there's not a lot of datasets like that out there, because it's a lot of manual, time-intensive work, and you need people with either the ability to learn or the specialized skills to do what I just showed you, which is not intuitive to a lot of people. Our data has to be collected independent of anything labeled on the product maps. Otherwise, you know, if we looked at the map products and said, well, what does it call this point, we'd be teaching to the test, which would bias the accuracy assessment.
Are there uses outside of this accuracy assessment? HORTON: Potentially, yes. And based on the number of questions I got at AGU, there's people who want it. So obviously the primary driver for the collection right now is to provide accuracy metrics for the Annual NLCD products. That's the primary driver right now, to get that data to compare our labels to what the map algorithm labeled those locations and give the users a metric for how accurate is this product. ADAMSON: The NLCD users? They know this is accurate. HORTON: Exactly. So they can see for the year or years they're using for the classes they care about, what is the accuracy. As far as other uses for the reference data? There's potential for people to use it, assuming that their labels, their land cover definitions, align with what was used when we collected the data. I could potentially see someone using it to test their own types of classification algorithms. There's a lot of research going into different types of machine learning algorithms for classification of land cover and land cover change, or using our data for some sort of, you know, other training or testing. We're also doing something kind of unique with our collection in that we talked about each interpreter collecting at those plots, 50% of our plot locations have a second interpreter look at them independently, and then we can compare those labels and look at, you know, do the interpreters agree on what is at that location those years? And there's been several papers over the last couple of years looking at how does interpreter consistency and interpreter agreement in a collection like this feed into overall accuracy metrics and area calculations and things like that. ADAMSON: Do you know if this type of accuracy assessment that you do is unique to NLCD? HORTON: There's a lot of work going into different land cover maps right now.
Everything from small, you know, area mapping to state, country, even people attempting to do global maps. And the gold standard of accuracy assessments for those types of products is to do something similar to what we're doing. Collect an independent, air quotes here, ground truth label at those locations and use that to calculate the accuracy of your product. There are projects out there doing this, but there are also a lot of projects that do other versions of validation. Things like using a subset of their training data. They set that aside. They don't use it to train their algorithm. They compare that to get their accuracy metric. The fact that we're collecting it from 1984 through 2023 and across all of CONUS, I don't know that anybody else is doing anything at that level. There are definitely other people out there who do photo interpretation. But to do it across that length of a time scale and at that Anderson level two level of detail, there's not a lot of that being done out there. It is the gold standard of how you validate a land cover product, but it's tough to do. ADAMSON: That's why it's gold. HORTON: Yes. If you're going to create a map, you should tell people how accurate it is. So creating the reference dataset, doing the validation, is what will allow the users to be able to look at those products and know, yes, this is a product I want to use. It works for my area of interest. It covers the land cover I want. Also kind of on a more philosophical side, knowing the accuracy, and by extension the inaccuracies, is what allows us to move land cover mapping science forward. You know, if we know where the current algorithm struggles, or we-- I know regions where there are particular challenges. That's what lets us then work to move the science forward and address those problems. If you don't know there's a problem, you can't fix it. ADAMSON: What's one of the biggest challenges in doing this work?
HORTON: The challenge is coming up with protocols and definitions that are clear, consistent, and can be applied consistently across multiple interpreters and across time. We need protocols that basically would let five different people look at this location and come to the same conclusion. And not just the current five. In the future, if a new interpreter comes in or someone retires, that knowledge needs to be transportable. And it's not as simple and clear cut as people tend to think. ADAMSON: Well, a tree is a tree, isn't it? HORTON: Yeah. No. That is my classic example. I spend a lot of time discussing what makes a tree, and then how many trees does it take to make a forest? Because what is a tree, if you ask a botanist, an arborist, a forester, and a homeowner, you are going to get four different answers for what is a tree. I can't have four different answers for what is a tree. Luckily, I love it. It's a puzzle to me to figure out how can you define something, especially when nature doesn't like to be put in buckets. Nature doesn't like to be stamped with a label. Nature likes to throw curveballs. So figuring out those labels and those protocols and making sure that they're as clear and unambiguous as we can make them is a challenge. But it's also a lot of fun. ADAMSON: Oh, that's cool. Well, maybe this is the same question. What's your favorite thing about the work that you do? HORTON: My favorite thing is the story. Being able to look at a unique perspective of the world and figure out what happened here, and knowing, figuring out that story, you know, for all of these different locations. I've looked at tens of thousands of locations at this point, and I still find things that I'm like, I have never seen that before. It gives you a unique perspective on the world. It's kind of like I got the overview effect, but in miniature.
You know, that feeling that astronauts talk about when they are up in space and they see the Earth, and it just changes the way you view the world because you realize how big and diverse it is. I get that, but I just sit at my desk. ADAMSON: Thank you to Jo Horton for talking with us about her work on reference and validation for Annual NLCD. And thank you, listeners. Check out our social media accounts to watch for all future episodes. You can also subscribe to us on Apple Podcasts and YouTube. VARIOUS VOICES: This podcast, this podcast, this podcast, this podcast, this podcast is a product of the U.S. Geological Survey, Department of the Interior.