JIM GUYTON: OK, next we'll have our Director of Engineering, Chris Meinig, come up to give a talk on research innovation.

CHRISTIAN MEINIG: Great. Thanks, Jim. Thanks, everyone. So I'm going to be representing the work of three groups: the Engineering Group, the Innovative Technology for Arctic Exploration Group, and the Science Data Integration Group. The first question we need to ask is: who cares? Why bother? Why do we do research? Why provide innovations in research? We've heard from the talks on the science side that the oceans are major drivers of climate, carbon storage, weather, fisheries, and ecosystems, and that two-thirds of your last breath was thanks to the oceans. Tourism alone is a $124 billion industry in the US. Yet, surprisingly, we spend only $200 million a year on in-situ observation. To give you some sense of scale (I've had a hard time wrapping my head around this number), that's about 20 days of General Motors' marketing budget. If that is what we allocate toward our in-situ observations, the societal choices we make are stunning. So clearly there's a need to develop new technologies and provide innovations to improve these forecasts and further the understanding [INAUDIBLE] you heard about today. And the focus today is going to be exactly that: how we develop technology here at PMEL.

For the overview: the Engineering Group has been around since the inception of the lab; it's really been a key component. We have 16 people there, including five engineers within the group, and you're going to meet them all today. We have a Science and Data Integration Group that consists of five people, including Eugene Burger; that's a new function since the last lab review. Our Innovative Technology for Arctic Exploration Group is also new as of the last lab review. It consists of four people, but connects with many, many more across the lab.
So if you look at all these groups together, we're working like the picture going on behind me here. As the groups get connected and linked together and further their collaboration, a picture emerges of what the observing system should look like. And that is what we do exceedingly well across these groups: connect, form partnerships, and then evolve the observing systems toward broader understanding.

So what does that look like? Again, it all starts with the science drivers, here on the left-hand side. Without these, you don't proceed further down the line. Once we understand the drivers and ask very smart questions about them, we can proceed to sensor integration and platform design. We start doing small-scale testing, start folding systems into operation, and automate data and workflows along the way, ultimately leading to integrated research missions where we combine all these things [INAUDIBLE] science help with. Some of those will go on to transitions: to private industry, internally to other PMEL groups, or to other line offices. A key part of all this is that we form partnerships with the right expertise and establish a feedback loop, so this cycle of iteration happens quickly, with frequent communication across all the groups at PMEL [INAUDIBLE].

Starting with the Engineering Group, our relevance here is really end-to-end development, which you'll see on the tour today. We build things. We actually make things from the ground up and apply innovation to them. The process starts with evaluating the opportunity: given the science drivers, does this fit within the OAR vision? Then we form the partnerships that bring in the expertise we need as we move along.
The next step is developing those ideas. They might start as prototypes and small-scale engineering designs, with local testing. We're very fortunate to have a portion of Puget Sound nearby, where we can make our mistakes very quickly, ultimately leading up to a full-scale ocean test. At this stage we start getting an idea of what a transition might look like: which partners we need to reach out to, and whether we've made good enough connections to explain what we're trying to do. Ultimately we get to the launch stage, where we go to full-scale fabrication, integrate multidisciplinary sensors onto the platforms, deploy them, and gather feedback and evaluation from scientific papers, ground-truthing these measurements against gold standards. The ultimate impact is science and engineering publications and transition to sustained research, or beyond that, to meeting NOAA's missions.

What you're not going to hear about today is the hundreds of parts that end up on the floor, the thousands of lines of code that never get used, the bins full of circuit boards that wind up in the recycling. It's key, though; it is at the heart of what we do here. If we're not failing enough, that means we're not innovating. So I am extremely fortunate to be part of an organization that gives me this license to fail, and I can thank Craig, Gary, and now Michelle for that. It looks something like this: our group has this license to fail fast, learn cheaply, and move forward. And I give this to the entire group and team: go ahead and do it, we've got your back. If you're not failing enough, you're not trying hard enough. This freedom to fail gives us clearance and headroom; ultimately, we are responsible for new breakthroughs. Without that freedom, the motivation doesn't come, the connections don't get made, and we don't take the risks we need to do this business.
A good example of that is the global tsunami array. It started off as a research project, something that a federal research lab is inherently better at than anybody else: tens of years of development to make a bottom pressure gauge with quarter-millimeter resolution, ultimately connecting it to real-time comms, connecting it to the Warning Centers, integrating it into a forecast, and delivering it. Our DART 4G system, the latest generation for the near field, was ground-truthed off Chile. Chile gave us open access through their CNO and asked: what do you need for ships? How do we develop this? How do we make this forecast great? Our international partners are asking for this technology because they have the same problem we do in the Pacific Northwest. The latest example is New Zealand, which purchased all of the new-generation technology [INAUDIBLE] the array, the entire [INAUDIBLE].

Now, on to the ITAE program. Our mission there is to build, connect, and explore, specifically with Arctic and subarctic technologies. We address gaps in observing technology: things that don't exist yet but have a scientific need. What might those be in the Arctic? You've heard about the great distances we have to cover, and the challenges of ice, extreme weather, and darkness. A few examples of the technologies that have come out of that program: our saildrone platforms, novel buoy platforms, Oculus gliders, and some comparative floats. The relevance is really expressed in the number of stakeholders we have: over 46 partner organizations, and this is one of the few programs that involves every single NOAA line office. We have transition products across the board, all the way from mapping to ingesting real-time data into the weather forecast models. As I mentioned, saildrones are an example that Calvin's going to talk about.
This one program alone has over 70 engineers and scientists from 10 different companies, 3 cooperative institutes, and every line office involved in the development. And our mission here was really to see how quickly we could adapt this development. In my experience, ocean observing technology typically takes about a decade to get from idea to full-scale operation. We did it here in four years. That's something different; that's something few places on the planet can do better. And that was something I heard just last week when I gave a talk: what an incredible model of collaboration.

An additional program here has been our PRAWLER development. What you're looking at here are chlorophyll signals from our mooring at M2. These green signals are chlorophyll-a, from 2016, 2017, 2018, and 2019. This is from conventional instrument spacing: typically two instruments at two different depths, trying to capture a time series of what's going on. A profiler, as [INAUDIBLE] will know, is more valuable. The PRAWLER uses wave energy, converting the motion of the buoy into locomotion, so it can climb and fall along the mooring line, and it profiles. You get an integrated profile. And just going back and forth here, which would you rather have? Notice that this entire band in 2017 would have been missed without a profiler. Entirely new things have been discovered in the first few years of [INAUDIBLE] Bering Sea.

Our Science and Data Integration Group came out of the last lab review. Its purpose is to put greater emphasis on our data workflows and PMEL's data sets. In the end, the data is the product that comes out of all these observing systems, and we need to handle it carefully. This group has shown advancements at the PMEL level, the OAR level, the NOAA level, and internationally.
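The fixed-depth versus profiler comparison above can be sketched numerically. This is a hedged illustration, not PMEL data or code: the chlorophyll field, the 25 m peak depth, and the sensor depths are all invented for the example. It shows how a subsurface chlorophyll maximum can fall entirely between two fixed-depth sensors while a profiling instrument resolves it.

```python
import math

# Hypothetical illustration (not PMEL code or M2 data): a synthetic
# chlorophyll-a profile with a background level plus a subsurface
# Gaussian peak at 25 m depth.
def chl_a(depth_m):
    """Synthetic chl-a (ug/L) as a function of depth in meters."""
    return 0.5 + 4.0 * math.exp(-((depth_m - 25.0) ** 2) / (2 * 5.0 ** 2))

# Profiler-style sampling: the full water column at 1 m resolution.
profile = [(z, chl_a(z)) for z in range(0, 61)]
# Conventional two-point mooring: sensors fixed at 10 m and 50 m.
fixed = [(z, chl_a(z)) for z in (10, 50)]

peak_depth, peak_val = max(profile, key=lambda p: p[1])
print("profiler sees peak %.2f ug/L at %d m" % (peak_val, peak_depth))
print("fixed sensors see: %s" % [round(v, 2) for _, v in fixed])
```

The two fixed sensors report near-background values (about 0.54 and 0.50 ug/L) while the profiler captures the 4.5 ug/L maximum, which is the kind of signal the speaker notes would have been missed in 2017 without the PRAWLER.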
Locally, for example, they provide the engineers with data so we can rapidly assess the efficacy of our systems: comparison plots, difference plots, gold-standard plots that let us know when we're ready to go to the next step, from the lab to Puget Sound, and from Puget Sound to the ocean. Without actually looking at data evaluation plots, we're not going to get there. At the OAR level they're leading many data management efforts and activities; at the NOAA level they provide these types of services, with the goal of a division; and at the international level they provide leadership for a lot of these data management processes. They also develop their own data management tools and software, things like the Live Access Server and PyFerret.

This graph here shows what the process looks like over time. Across the top you have the stages of how we develop, starting with conceptualization and design and going all the way to science applications. The scale indicates each group's level of activity: engineering starts heavy early on and [INAUDIBLE] as we get to science application, while the scientists and the data group get heavier toward the back. What we've learned over time is that traditionally we had been deploying the groups in stages, so the data group was getting involved too late. That caused problems in the formatting of the data and in the metadata, in how we transition things from the microprocessors all the way up the scale, including the data calibration systems. What we learned is to get involved earlier. Now we get everybody involved right from the beginning. We start talking about the value of the data, what the data might look like, what the metadata tables look like, and how we contribute properly to that. We have this discussion much earlier, and it has folded into many of our other systems. The payoff has come even at this early stage.
The payoff in the early engineering support stages, during sensor integration and platform development, is that you catch your errors quicker. Those can be simple things, from offsets to translation issues to bit error counts. We catch all of that much earlier in the process stream. On the back end of the whole system, these data workflows gain efficiency through machine-to-machine transfer and a more efficient data landing process. We don't have the slow parsing efforts anymore; these are all automated, which allows people, whether they're the engineers, the scientists, or the data groups, to get the data in front of them earlier, however they like and in whatever format they're used to working with.

One thing for future applications: on this path here between the platform and the satellite, and coming down to the ground station, there is some concern about link integrity. There are many other checks along the way right now, but beyond doing checksums, this is a place where we have to be concerned about data integrity. As data comes off the platform, how are we going to handle that in the future? Might there be some vulnerabilities that we haven't yet learned about? This is going to get more and more important as those data pipelines open up and we start streaming much, much higher-bandwidth data [INAUDIBLE].

So what are some key output measures from all this? We've had six CRADAs in the last four years. We have two MOUs. We've completed 22 transitions and have 11 underway. We've got one active patent license, two patents, and three trademarks. Under our patent license, SAIC produces the tsunami buoy; they've had $30 million in commercial sales in the last four years. A portion of that revenue comes back to the lab, and the lab director chooses to invest it in new technology or education; there are five or six things Michelle will choose from. [INAUDIBLE]. Challenges.
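The checksum idea mentioned above can be made concrete. This is a minimal sketch, assuming nothing about the actual telemetry format: the frame layout, the sample payload string, and the function names are all invented for illustration. It shows a CRC-32 appended to each data frame so the receiving end can detect accidental corruption on the platform-to-satellite-to-ground link; note that a CRC catches random bit errors but not deliberate tampering, which is why the speaker's integrity concern goes beyond checksums.

```python
import zlib

# Hedged sketch (not an actual DART or saildrone telemetry format):
# append a 4-byte CRC-32 to each payload before transmission.
def frame(payload: bytes) -> bytes:
    """Return the payload with its CRC-32 appended (big-endian)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(framed: bytes) -> bool:
    """Recompute the CRC on arrival; a mismatch means corruption."""
    payload, crc = framed[:-4], framed[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

# Illustrative bottom-pressure-style record (values are made up).
msg = frame(b"BPR,2016-09-01T00:00:00Z,2843.1472")
assert verify(msg)                         # clean transmission passes

corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]
assert not verify(corrupted)               # a single flipped bit is caught
```

CRC-32 was chosen here only because it is a familiar, stdlib-available check; an operational link would pick its error-detection code to match the channel's error characteristics.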
We've all got challenges, and engineers work a lot on trade-offs to try to solve these problems. Can our human systems keep up with automation? How do we make decisions, and what are those decisions based on, now that we have more automation coming online? Can our human systems deal with the way the environment is changing? How do we change? Business as usual is simply no longer acceptable; we have to change the way our human systems work together to take advantage of this and respond to what's actually happening in the environment.

Our data handling is approaching a crisis [INAUDIBLE]. We need to invest more in data handling to extract all that value at the various points in time where it matters. Data is valuable from the second it comes off the platform, but it's also valuable a couple of years later, after the papers have been written, so you need to curate and take care of that data all along the way. Our government systems are just not keeping up with the demands of the modern world; we need to shore up our infrastructure to support these types of systems. And on this arc from breakthrough to transition, I think we need the technical skills and project management skills to execute projects that are only growing more complex. You've heard about some of the science and technology today; all of it speaks to the level of complexity involved in managing a program. And there's a local issue in Seattle that everyone with Amazon here knows: salaries for engineers and data scientists are outrageous.

On external collaborations: we're really nothing without our partners, who span industry, academia, and public agencies. One highlight [INAUDIBLE] where we transitioned and improved the design for manufacturing, making it available as of last year [INAUDIBLE].
With our research institutes, an example of collaboration is with a research foundation, where our latest pCO2 package is being integrated this week into wave gliders being deployed off Hawaii. They're going to deploy those at scale, leveraging PMEL technology for global observations. An example with public agencies and tribes is [INAUDIBLE] with the WMO, who are taking up our open GTS project, which makes data available in real time on the GTS for global consumption.

Some emerging areas: partnerships with industry and foundations are important, but a caution here: we need to handle these relationships carefully. Each one is unique, and you need to understand the cultures of both organizations to make sure it's a win-win for everybody. The robotics revolution is upon us, and adaptive sampling is here. How are we going to make the best use of it? What are we going to do with it? We're literally swimming in an ocean of in-situ data. How do we handle that? One idea might be to outsource real-time QC of the data to the experts: picture an integrated platform with many different agencies QC'ing their own unique data sets as we get into more complex data systems.

In summary, I've highlighted some of the quality, relevance, and performance of these three groups, and I feel they are critical for expanding our ocean observations and implementing breakthrough technology in the observing missions we're tasked with here at PMEL. And I'm going to end there.
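The real-time QC idea mentioned near the end could look something like the following. This is a minimal sketch, not an operational NOAA or PMEL system: the function name, the thresholds, and the sample values are invented. The flag values loosely follow the IOOS QARTOD convention (1 = pass, 3 = suspect, 4 = fail), and the idea is that each expert agency would supply the ranges for its own data stream on a shared platform.

```python
# Hedged sketch of a real-time range check, one of the simplest QC
# tests an agency might run on its own incoming data stream.
GOOD, SUSPECT, BAD = 1, 3, 4  # flag values loosely following IOOS QARTOD

def range_check(value, fail_span, suspect_span):
    """Flag one observation: BAD outside the gross physical range,
    SUSPECT outside the climatological range, GOOD otherwise."""
    lo_f, hi_f = fail_span
    lo_s, hi_s = suspect_span
    if not (lo_f <= value <= hi_f):
        return BAD
    if not (lo_s <= value <= hi_s):
        return SUSPECT
    return GOOD

# Illustrative sea-surface temperatures (deg C); thresholds are made up.
sst_stream = [4.2, 5.1, 38.0, 14.9, -3.0]
flags = [range_check(v, (-5.0, 40.0), (-2.0, 15.0)) for v in sst_stream]
print(flags)  # -> [1, 1, 3, 1, 3]
```

In a shared QC platform, each agency's expertise would live in the thresholds and in more sophisticated tests (spike, rate-of-change, climatology), while the flagging convention stays common so downstream users can interpret any data set the same way.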