Webinar on 'Remote Inspection: Part 3 - Reality Capture for Remote Inspection'
Webinar Recordings
To access our webinar recordings we recommend that you use the Adobe Connect application, which can be downloaded for Windows or Mac devices. If you are unable to install the Adobe application, you can use a web browser; however, Internet Explorer does not support Adobe Connect webinars or recordings.
Webinar Transcription
Transcription for Webinar on 'Remote Inspection: Part 3 - Reality Capture for Remote Inspection'
Remote inspection: Part 3 - Reality Capture for remote inspection
Speakers: Paul Bryan, David Andrews and Jon Bedford
Paul [00:06] Good morning, good evening and whatever else there is. Welcome to the third and the final [indistinct] of the remote inspection theme, where we've been looking at geospatial technologies to aid the remote inspection of heritage objects, structures and landscapes. My name is Paul Bryan, and my contact details will be coming up shortly, but just to remind you that this is part of the Technical Tuesday Technical Conservation webinar series that's been running for quite some time now, and it's proved extremely successful given the number of attendees that we've had and the people listening to the recordings.
If you want to know more about the other webinars-- I'm sure you've all found the link already, so I think that's where you'll find the recordings. And if you actually want to know more about the work of the Technical Conservation team within Historic England, there are a couple of links there that'll give you the information and advice that we provide, but also the quite detailed and lengthy guidance brochure. Historic England is constantly generating new guidance, so please download that, and you'll see some of the geospatial-related guidance that's in there as well.
So moving on to the theme of today. Here we're talking about this term 'reality capture', which, if I was talking to you about ten years ago, probably wouldn't be in my vocabulary at all. It's a fairly new term that's come into our area. But once again, we're talking about the remote inspection theme. On this slide are my contact details. I'll just highlight the three images at the bottom of the slide there. The first is ARPAS UK, who are the association for remotely piloted aircraft systems in the UK, basically the UK drone association. We're a member, and I'd highly recommend anyone interested in the use of drones to approach them as well.
Moving to the right, you see The Survey Association, or the TSA. They're basically the professional organisation for surveyors in the UK. Once again, we are members of that, and I would highly recommend you look at their website, because they have a mass of downloadable free guidance on not just the technologies we're talking about today, but others within the realms of survey. In the middle is what we look at as the event that us geospatialists, if there's such a term, look to attend. That's called GEO Business. Sadly, that's been cancelled several times due to what's happened over the past year, but I'm keeping my fingers really, really crossed now that in November we'll all be able to meet up again, and I'd highly recommend anyone who's interested in what we're talking about today, and even the other approaches we've spoken about, attending that event. Alice kindly mentioned the two colleagues that we have, Gary Young and Lizzie Stephens, who are there. They will hopefully be answering any of the chat questions that come up.
So, just moving on. I'm joined in the presentation by my colleagues David Andrews and Jon Bedford. They're going to be giving the majority of the presentation. I'm really the sort of top and tail of it, sort of introducing this term 'reality capture', and then I'll provide some sort of summary points at the end. And you can see what David and Jon will basically be speaking about.
But as we've been doing on all of these webinars, we'll start off with a poll question just to get some user participation. So you've got a question there: how many of you undertake remote inspection surveys yourself, commission them from external contractors, have a need for them but don't know where to go, or don't use them at all but may be keen to learn? So if we just let that go. I think you can only click on one. Interesting how the final one is jumping ahead in the lead. That's interesting. Right, give it a little bit longer, bearing in mind we've got 189 people attending. Right, let's close it there, Alice, if you can. So it looks as though the final one, 'don't use remote inspection surveys but keen to learn more', that's really interesting because that sort of gives me some ideas for what happens after this webinar series finishes.
So let's move on. So reality capture. Many of you will probably not have any idea what reality capture is. There is a definition that appeared in one of our guidance documents that you see on the screen there, and we class it as the direct integration of data that's derived from these two techniques called photogrammetry and 3D laser scanning. A month ago we spoke about 3D laser scanning for remote inspection, but we haven't really focused on the other approach, photogrammetry. And it's not just a case of using 3D laser scanning and basically wrapping it in an image texture. There's a little bit more to it than that, which my colleague Jon Bedford will go into when we look at the case studies.
But before I pass on to David, just looking at the pros and cons or the advantages and disadvantages of both approaches, because even in 2021 there is still no one technique that will do everything that we would require within heritage. So if you look at photogrammetry, laser scanning, they're both applicable on all 2D and 3D surfaces, so effectively you can use them both for what we find across heritage. They both generate high-resolution 3D point data. But for the photogrammetry, it's more a case of the post-processing, or the back office post-processing. For laser scanning, it's more in-the-field data capture, and then it's a case of registering that data set together.
Photogrammetry these days can use a variety of off-the-shelf cameras, such as the one that Jon is showing in the left image there. You see he's using a Sony Alpha 7R II. I believe it's his. There are people in the audience who probably remember the days of metric cameras for photogrammetry. You can still use them for that, but the majority is done using off-the-shelf cameras. Laser scanners, as we showed in the last webinar, are quite sophisticated devices these days for capturing a mass of 3D point data. Most of them capture an image of some form. 360-degree images can be provided, and some can even capture thermal, and I'm hoping in the near future that we'll have multi and hyperspectral capabilities coming into some of the commercially available devices.
Multi-image photo-- Sorry, on the photogrammetry side, it uses multi-image photography. Historic England has an archive down in Swindon where all of our imagery is deposited, and that forms an excellent archival record for heritage structures. Laser scanning generates an archive record, but you'll see I've used the term 'good' there. I haven't actually said they're both the same, because laser scanning is only as good as the resolution that is chosen by the operator, such as Gary, as we see on the screen there.
And then the final advantage with photogrammetry, as you'll see later on with Jon, is that the use of drones is certainly enhancing the coverage that we can capture and generate when you're dealing with large, complex heritage structures. For laser scanning, we have seen a number of mobile solutions that are adding to this, but we're also finding that terrestrial scanners, as you see on the screen there, are becoming faster and faster, so there's a convergence of the mobile and the static towards very fast data capture in the field.
Looking at the cons, or the disadvantages, each technique won't work on everything. It's worth remembering that photogrammetry needs texture of some form. Fortunately, heritage structures tend to have quite a lot of visible texture. They tend not to be like the magnolia walls I'm staring at in my study in North Yorkshire. Laser scanning will reflect off most surfaces, but if they're reflective or translucent, you're going to produce, effectively, inaccurate data, so you have to be a little bit careful what you fire your laser scanner at.
Photogrammetry needs more than one image. I remember the days of stereo photogrammetry. So by generating multi-overlap photography, you're generating large image sets, and similarly with the laser scanning, you're generating extremely large data files that are often difficult for the end-user client to view. There are techniques to help with that. And it's not a case of just click, click, click, or even scan, scan, scan. You have to think about where you position the camera, where you position the scanner, to maximise the coverage. So for instance, with a camera, don't just rely on the GPS that is often in a mobile phone or a camera these days, and don't always rely on laser scan cloud-to-cloud registration. We talked about this in the previous webinar.
And then finally, a disadvantage. The software packages that you'll see David and Jon talk about are effectively black box technologies these days. You feed something in, you'll get something back out in terms of data. But if you feed in inadequate imagery or inadequate scan data, don't expect adequate, let alone excellent, results coming out the other end. Think about the quality of the input, really. And then with laser scanning, heritage has long used line drawings, effectively an interpretation of the structure. I'm still waiting to see automated feature extraction work fully on a heritage structure. I've been waiting probably about 30-odd years now, so seeing as I'm off to retirement later this year, I might have to keep an eye on what comes out.
So automated feature extraction still not working satisfactorily for heritage. So that's enough from me, sort of just introducing reality capture and the pros and cons of the two component techniques, so I'll hand over to David now, who is going to start off with a poll question.
David [12:04] Hello. Good afternoon, everyone. Yes, so the questions there are how familiar are you with photogrammetry? So there's 'Very familiar', 'Familiar but get others to undertake it', 'Have heard of it but not sure what it is' and lastly 'Photo... what?' But I've seen some people in the audience who I'm sure will be answering the top question, but equally we want to pitch the talk to be as inclusive as possible. So honest answers are very welcome.
OK, so I think we might be getting towards the number of people who are going to vote, so it's good to know that we can see that quite a few people are familiar but would like to know more. So I'm just going to move on to my first actual slide. So this is just a diagram that shows us what happens when we take a photo of a 3D object. We obviously get a 2D projection of that object in the camera, and the whole raison d'être for photogrammetry, I suppose, is to reverse that process. But to do that, we obviously need-- well, not obviously, but I hope to demonstrate that we need at least two photos and these days quite a few more for the subject that we're investigating.
So there are some basic principles in photogrammetry, or concepts in photogrammetry, that it'll be helpful for us to understand. And although, as Paul says, the software is generally black box these days, it's normally addressing these three things here, and we'll at some point need user intervention to facilitate them. So the interior orientation is all about the characteristics of the camera, such as the focal length. The principal point is, to put it bluntly, the point right in the middle of the lens as it's projected onto the image, and then lens distortion is something that affects the accuracy of our end results, and the more we can take account of that, the better.
So part of recreating the 3D geometry is the relative orientation, and this is where we or the computer work out how each photo, or the camera position that took the photo, relates to all the others that form a model, so it's calculating a relative x, y and z coordinate between each camera position but also the attitude, as we call it, or the way the camera was pointing. So that's roll, pitch and yaw in aviation terms-- boating terms, even.
And then the third concept is absolute orientation. Once we've got a 3D model, because it's just photographs, there's no innate scale like we have with laser scanning. So we need some way of applying a scale, and we also need to relate that 3D model, or we normally do, to a site coordinate system or maybe a local coordinate system, depending on what we're doing. If it's remote inspection of a building, then it's going to be useful to have it related to the site coordinate system or even the National Grid. And for those of you who've heard of exterior orientation, that's what we call the combination of relative and absolute orientation, so that's what goes on exterior of the camera.
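To make those three concepts concrete, here is a minimal Python sketch (plain NumPy, all numbers illustrative and not from the talk) of how the interior and exterior orientation parameters combine when one 3D point is projected into a photo. The photogrammetric software effectively runs this projection in reverse, solving for the camera parameters and positions from the photos themselves.

```python
import numpy as np

# Interior orientation: focal length and principal point, here in pixels.
f = 3600.0                  # focal length
cx, cy = 2000.0, 1500.0     # principal point (roughly the image centre)
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

# Exterior orientation (relative + absolute): how the camera was pointing
# (roll, pitch, yaw as a rotation matrix) and where it sat on the site grid.
R = np.eye(3)                       # camera looking straight at the subject
t = np.array([0.0, 0.0, 10.0])      # camera 10 m away from the origin

X_world = np.array([1.0, 0.5, 0.0])     # a point on the facade
X_cam = R @ X_world + t                 # world -> camera coordinates
u, v, w = K @ X_cam
print(u / w, v / w)                     # its pixel position in the photo
```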
Just a bit more about interior orientation. So you can see from this cutaway the amount of glass involved in a camera lens, and that helps us to understand quite how images can be distorted by refraction and so on, and by imperfections in the lens manufacture. In the old days, when I started photogrammetry, we would have to have a camera calibration for our cameras, which were often specially made, what they called metric cameras, and that would normally be done by a special laboratory or the manufacturer of the camera. As things developed, it became possible to do that yourself, but it was still a tedious process. These days, the modern software that most people use self-calibrates from the images, but it will also allow you to put in calibration figures. So the calibration, just to reiterate, will tell us the exact focal length of the lens, where this principal point is and different ways of describing the lens distortion, and that allows the software to correct for that distortion.
So here's just-- On the top left, there's an image of one of these calibration charts. Then at the bottom right, those are all the different photos you'd have had to take of that one board before importing them into the software for it to calculate the various parameters of the interior orientation.
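For anyone who wants to try the self-calibration David mentions, below is a hedged sketch of the standard chessboard-chart workflow in the open-source OpenCV library. This is one common way to do it rather than the method behind the slide; the folder name and board size are placeholders.

```python
import glob
import cv2
import numpy as np

board = (9, 6)          # inner corners on the chart; depends on your board
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration/*.jpg"):     # your photos of the chart
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]               # (width, height)
    found, corners = cv2.findChessboardCorners(gray, board, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solves for the camera matrix (focal length and principal point) and the
# lens-distortion coefficients that the software can then correct for.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print(camera_matrix)    # fx, fy on the diagonal; cx, cy in the last column
print(dist_coeffs)      # radial and tangential distortion terms
```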
So, moving on to the relative orientation. So, as I think I've indicated, it's basically the positions of the camera when the photographs were taken, and this is probably not the forum to try and go into the details of it, but I think this diagram sort of explains what's happening. The rays of light are intersecting onto 3D points and projecting back into the camera. So it's a process known as image matching, these days, and just-- In the old days, it was a manual process of basically finding matching points between two stereo images and making various adjustments so you ended up with a 3D model that you could see in a photogrammetric instrument.
Nowadays, the software will-- apologies for this photo being on its side, but the system still works. It doesn't actually, at this stage, need to know which way up the photo is. So image matching in the modern software will basically look for easily identifiable points in the image, and these are identified by the software, not by you or I. We would probably pick out different points from those the software does, and it's doing it on a much finer grain, as it were, than we would. So all these blue dots are points that the software's identified in that photo, and they're called key points, so they're points that it's hoping to find in the next photo. So this is the next photo – another load of key points that it's identified.
And then if we move on, I think the white points are the points that it has worked out are common between the two photos, but not just the two photos. It's actually working on the whole set of photos that we've put in as imports. So along the bottom there, you can see a number of photos, so it will attempt to match every photo to every other photo. Obviously, there's some photos that just aren't overlapping other photos in the model, but the software will basically work out where all the different photos were taken from.
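As an illustration of what that key-point detection and matching looks like in practice, here is a minimal sketch using OpenCV's SIFT detector and a brute-force matcher. The commercial packages do something broadly comparable internally, though not necessarily with these algorithms; the file names are placeholders.

```python
import cv2

img1 = cv2.imread("photo_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # the 'blue dots'
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two photos and keep only the confident
# matches (Lowe's ratio test) - the points common to both images.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
print(len(kp1), len(kp2), "key points;", len(good), "matched")
```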
So in this image, we can see a barn, and the blue squares are all the photos that were taken around that barn to make the photogrammetric model. And at this stage, it's being presented as what we call a dense point cloud, which I will go on to explain in just a moment. Here's another image of camera arrangements. On the left, that's a landscape that a fixed-wing drone has flown up and down taking photos of, and the software has recreated the positions of those photographs to then produce that model of the landscape, which, although you can't see it from this distance, is a tin-mining area.
And then on the right-hand side, there's another set of drone photography. Again, it's slightly hidden by the icons for the cameras, but that's a building, which will have had photos taken from the ground, but drone photography's been acquired as well so as to get the wall tops and the windowsills and so on. And just to say, on the previous slide, some of those upper photos were taken using a photographic mast, so, depending on the size of your building, that's an equally applicable way of acquiring those photos that you couldn't acquire just from the ground.
Moving on to taking the photographs to be able to produce the 3D model. So these diagrams are showing good and bad ways of taking photos of various different subjects, but the main point to stress is that photos have to be overlapping, and the general consensus these days is that 60–80% overlap is going to get a good result. If there's 100% overlap, you won't actually get the 3D effect, so it has to be somewhere between about 50% and 80% to get a good result.
So the next bullet point is just reiterating that the photographs have to be taken from different positions. So on that top left, you can see the incorrect photos of a facade have basically been taken from the same position but just looking in different directions, and then in the correct ones, the photographer has taken a few steps to the right each time as they've taken the photos. I guess it's obvious that photos should be correctly exposed, but we're not looking for artistic photos for photogrammetry. We want a good, even exposure over the whole image so that there's consistent information for the software to work from. Ideally, the whole image will be in focus. In the areas that are out of focus, the image matching wouldn't work, so it's always good to address depth of field. The better the depth of field, the more of the photograph, hopefully all of it, will be in focus.
The camera shouldn't move while the photo's being taken, or it shouldn't move to the extent that it would cause motion blur. So obviously with a drone, the drone's probably moving when the photo's taken, so that means you need a much faster shutter speed. And if you want to get good depth of field, you normally need a slower shutter speed, so that's where, especially for photos taken from the ground, using a tripod can be very useful.
Just going back to correctly exposed-- The best way to get a good even exposure is to take the photographs on a bright but overcast day. So we don't want deep shadows in the images because those will just appear as areas of deep black that the software won't be able to identify any suitable key points in, and equally, we don't want anything that's really brightly over-exposed because again, in the image, it will just seem like an area of white. I think Paul touched on this earlier as well. For photogrammetry, we can't really work with plain white walls, for example, because there's just no texture there for the software to try and identify key points in.
And then I didn't actually put it on this slide, but it's probably worth mentioning that, within reason, the higher the resolution of your image, the better the results you're going to get, so if your camera can be set to its highest resolution when you take the photos, then that's the thing to do. There is obviously a slight payback in that the images will be of a larger size, so there might be more hard disk space required and so on. But it's always better to take the photos with the highest resolution possible, and then if you're struggling, you can think about reducing the resolution if necessary. So if anyone's heard the term 'ground sample distance', the higher the resolution of the photos you take, the more likely you'll be able to have the ground sample distance that you want, which is basically just the real-world size that a pixel in the photo or the end product represents.
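The overlap and ground sample distance arithmetic can be made concrete with a small worked example. The camera figures below are illustrative, not the team's actual kit.

```python
focal_length = 0.035    # 35 mm lens, in metres
sensor_width = 0.036    # full-frame sensor, in metres
image_width = 7000      # pixels across the sensor
distance = 10.0         # metres from camera to facade

# Footprint: how much of the facade one photo covers horizontally.
footprint = distance * sensor_width / focal_length       # ~10.3 m

# For 70% overlap, step sideways by 30% of the footprint between photos.
step = footprint * (1 - 0.70)                            # ~3.1 m

# Ground sample distance: the real-world size of one pixel in the photo.
gsd = footprint / image_width                            # ~1.5 mm per pixel
print(f"footprint {footprint:.2f} m, step {step:.2f} m, "
      f"GSD {gsd * 1000:.2f} mm/pixel")
```

At the half-millimetre-per-pixel figure Jon quotes later for the Rochdale orthoimages, this illustrative camera would need to be roughly 3.4 metres from the surface.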
So moving on, this is just to go over the software process. So when the image-matching occurs, what we call a sparse point cloud will be produced. So this is all the points that it used to recreate the camera positions, and then once it's achieved that, it will fill in all the gaps by going back to all those key points I mentioned and projecting those into the 3D space, so we'll end up with this model here that looks a bit grainy, and that's because it is just these points.
The next stage would usually be to actually create a surface, so fitting a surface to all those different points. It's normally a triangular mesh. It's a bit difficult to see in this image, but that is a load of triangles. Those triangles can also be coloured by the colour from the photographs, so that's just applying the relevant colour to the points in the mesh. But to make it look better, we actually would want to texture the surface using the original photos, so hopefully, you can see that this image here of the model looks a bit nicer, and that's because the 3D surface has had this texture from the original photos projected onto it.
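As a hedged illustration of that surface-fitting step, here is one way to turn a dense point cloud into a triangular mesh using the open-source Open3D library. This is not the software the team describes, and the commercial packages handle the step internally; the file names are placeholders.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense_cloud.ply")    # the dense point cloud
pcd.estimate_normals()                              # Poisson needs normals

# Fit a triangular mesh to the points (screened Poisson reconstruction).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Vertices inherit colour from the points; for the nicer result described
# above, the original photos would be projected on as a proper texture.
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```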
So I mentioned the absolute orientation as being the second component of the exterior orientation at the beginning there. In traditional photogrammetry, the way to achieve that would be to either have a scale bar or, if you want to relate everything to a site grid or the National Grid, have some targets in the photographs which you'd measure with a total station theodolite. You'd need quite a few of these targets. Maybe not as many as shown in the graphic on the right, which is basically talking about stereophotogrammetry in the old days, where you'd have four targets per overlap, but you would need at least four targets for the whole model there. Or, and I won't steal Jon's thunder on this, the other way of doing it, which is one of the main concepts of reality capture, is combining the unscaled photogrammetric model with laser scanning to get the scale and also a 3D position.
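The target-based route amounts to fitting a scale, rotation and translation between the targets' coordinates in the unscaled model and their surveyed coordinates. A minimal NumPy sketch of that fit, the classic Umeyama similarity solution with made-up coordinates, looks like this:

```python
import numpy as np

# Four targets: where they sit in the arbitrary photogrammetric model...
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
# ...and the same targets measured on the site grid by total station.
survey = np.array([[10.0, 20.0, 5.0], [10.0, 22.0, 5.0],
                   [8.0, 20.0, 5.0], [10.0, 20.0, 7.0]])

mu_m, mu_s = model.mean(0), survey.mean(0)
A, B = model - mu_m, survey - mu_s
U, S, Vt = np.linalg.svd(B.T @ A)           # cross-covariance decomposition
d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
R = U @ np.diag([1.0, 1.0, d]) @ Vt         # best-fit rotation
scale = (S * [1.0, 1.0, d]).sum() / (A ** 2).sum()  # best-fit scale
t = mu_s - scale * R @ mu_m                 # best-fit translation

on_grid = scale * (R @ model.T).T + t       # the model on the site grid
print(np.abs(on_grid - survey).max())       # residuals; effectively zero here
```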
We can have 3D models directly from photogrammetry without going down the reality capture route. So the 3D models could be used for remote inspection as a way of just getting someone in an office to be able to inspect a structure. So this crumbling burial chamber here could be provided as a navigable 3D model that someone could zoom into, rotate round and so on, and basically do their condition survey from the comfort of their office. We can also use photographs from drones or masts to make models of inaccessible areas. So someone on the ground examining this structure here wouldn't be able to see those – I don't know if you can see them – bronze cramps that are holding the capstone together. They only became apparent when we took the photos using a mast.
So a 3D model is great for getting an understanding of the subject, but it's not always the easiest thing to annotate with findings or put into a report or whatever, so this is where we would come on to an orthophotograph. So an orthophotograph is basically a 2D projection, an orthographic projection, of that 3D model that we've created. So here's one of our office in York. You can see all the different types of brickwork and treatments, and it's a 2D drawing, so it could be put into AutoCAD and you could mark up, for example, what needs doing to every window on that facade.
So as Paul said, a lot of people still want to work with line drawings. So I think we talked about in the previous webinar on laser scanning that you could extract line drawings from the point cloud, and this is actually how this elevation was generated, but it could equally have been done by digitising from that 2D image, and you'd have a different product – possibly more useful in some ways, but not as useful in others.
So that's me coming to the end of my little bit, although I just wanted to say a couple of words about what orthophotographs are, and I imagine a few people on this webinar will have heard of rectified photography. It's easy to confuse rectified photography with orthophotos, but the point to remember about an orthophotograph is that everything we see in this image here is to the same scale. The distortion due to relief, but also camera tilt, has been removed, so this is the equivalent of a 2D orthographic drawing of that elevation there. So you could measure a distance on the front face of that buttress, and it would be correct, as it would if you measured on the main elevation. And then, just to compare it with a rectified photo, you can see from the box there that this photo's only been rectified for that main elevation, so the projecting buttresses are at a larger scale than the main elevation. So that's just something to bear in mind if you're asking for rectified photography or being told that orthophotographs are what you need.
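That buttress point can be shown numerically. In the tiny sketch below, two features of equal real width sit at different distances from the camera: perspective imaging renders them at different sizes, while an orthographic projection at a fixed scale does not. All numbers are illustrative.

```python
f = 3600.0                      # focal length in pixels
width = 1.0                     # real width of each feature, in metres

persp_main = f * width / 10.0   # main elevation, 10 m away: 360 px
persp_butt = f * width / 8.0    # buttress projecting 2 m closer: 450 px

# An orthographic projection ignores depth: in a 2 mm-per-pixel orthoimage
# both features measure the same number of pixels.
ortho = width / 0.002           # 500 px for either feature
print(persp_main, persp_butt, ortho)
```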
So that's me finished talking about photogrammetry. Now it's over to Jon, who will tell you how we combine the two into a reality capture model and the advantages that that gives us.
Jon [32:14] Thanks, David. So Paul and David have both talked about the relative advantages and disadvantages of laser scanning and photogrammetry. As for reality capture, as Paul mentioned, it's very popular in the survey industry at the moment. It has different meanings to different people, but the way we tend to interpret it is as hybrid modelling, so a mixture of the benefits of both the laser scanning and the photogrammetry.
So we get the metric accuracy of the laser scans, and we combine them with the textural richness of aerial and terrestrial imagery. One of the important things to remember is that these data inputs are not processed separately, so we don't process just the laser scanning, then just the photogrammetry, and then bring these together. This is a simultaneous registration of all of the inputs into a single unified pose estimation. So the registration really is what it's all about. Now, with our laser scanning, we'll often register that in separate software and then bring the photogrammetry in afterwards.
So what this allows us to get at the end is a seamless, consistent and accurate coloured, textured mesh. For generating our outputs, as David mentioned, we need a 3D model in order to get orthographic projections of it, in order to remove perspective, so that we can have orthoimages that can be measured off and fulfil their function. One of the advantages of mixing up the data is that we can use the photogrammetry to infill areas that we can't get to with the laser scanner, and we can also enhance areas where the laser scanner data is sparse and provide superior textures. We don't have to work with a hybrid mesh. We can apply the texture from the imagery to a mesh derived just from laser scans, or we can apply that imagery to a hybrid mesh. I'm going to talk about a couple of case studies – one at the very small end of the scale for us and one at the large end – so we'll move on here.
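As a much-simplified sketch of that 'texture from imagery' idea, the function below colours laser-scan vertices by projecting them into a single oriented photo. Real reality-capture packages blend many photos, handle occlusion and bake proper texture maps; this plain-NumPy, nearest-pixel version only shows the underlying geometry, and every name and value in it is hypothetical.

```python
import numpy as np

def colour_vertices(vertices, image, K, R, t):
    """Colour each scan vertex from one photo.

    vertices: (N, 3) scan points in world coordinates; image: (H, W, 3)
    array; K, R, t: the photo's interior and exterior orientation from
    the joint registration. No occlusion handling, for brevity.
    """
    cam = (R @ vertices.T).T + t              # world -> camera frame
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide -> pixels
    u = np.clip(uv[:, 0].astype(int), 0, image.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, image.shape[0] - 1)
    return image[v, u]                        # one RGB colour per vertex

# Illustrative call with dummy data: a 4000 x 3000 photo, random points.
K = np.array([[3600.0, 0.0, 2000.0], [0.0, 3600.0, 1500.0], [0.0, 0.0, 1.0]])
verts = np.random.rand(100, 3)                # stand-in scan vertices
photo = np.zeros((3000, 4000, 3), dtype=np.uint8)
colours = colour_vertices(verts, photo, K, np.eye(3), np.array([0.0, 0.0, 5.0]))
```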
So the first one that I'm going to talk about is the Roman Catholic Church of Saint John the Baptist in Rochdale. It's a Grade II*-listed building by Hill, Sandy and Norris from the early '20s, and it's got a ferro-concrete dome with barrel vaults and an interior decorated in an early Christian Byzantine style. Now, the reason we were there was that the sanctuary is covered with this wonderful mosaic scheme of the early '30s by Eric Newton, and we were undertaking recording in advance of repairs to that mosaic. There are some cracks evident in the dome and up round here, and some evidence of leaching through the dome.
Now, this is a relatively small example of work that we've done. So although I say small, we're still looking at 40 gigabytes of data, all up. We only did three scans with a terrestrial laser scanner in here to provide a metric framework, a sort of armature, if you will, for the photogrammetry, and then we took 447 images. So that 40 gigabytes for everything includes the raw scanner data, the registered scanner data, the raw images and the models that were eventually derived.
So if we look at some of these inputs and what the laser scanning gives us on its own, this is an untextured mesh. We can see that we have an excellent metric record, but if we tilt this view to look from above, you can see the scanner positions; I'll use the pointer here. They are here, here, and there's one just here, represented by these cubes, but they're all on the ground. So what the scanner can't see is the bases of these windows. We couldn't get behind the altar and where there were lights and all of that, so we have an incomplete record from the laser scanning on its own. And if we process those laser scans, those three on their own, you can see that we get a 3D model that accurately represents the dimensions of the space that we're interested in. But if we zoom in to an area of that, we can see that the imaging, in common with that of most laser scanners these days, isn't really very high quality, so in terms of assessing this model and its intended use for the mosaic reconstruction artist to work with, this isn't really a terribly usable product.
So in this slide, you can see that we've augmented the laser scanning with photography. So here you can see-- Here are the positions of the photographs in addition to the three initial laser scans, and you can see that we've used a mast, a photographic mast, to get the camera up at a high level to take those infill images. And you can see here that the data behind the altar and the windows there has been infilled with the photogrammetry. This is the model textured with the photogrammetry, and I'm just going to zoom in to the same area, which was up here, to show you the difference between the photogrammetric texture and the one from the laser scanning imagery.
So if we look at that same area with the photogrammetric texture applied, you can see that it's an order of magnitude better, and although we're not zoomed right in here, you can see, I hope, that we can see all of the tesserae and where the salts are leaching through, their precise positions all over this model. And this allows for close inspection. Here, in this slide, I hope you can see that we've got a crack appearing across the mosaic there. So we can get the 3D positions of all of this sort of stuff from the model.
For the purposes of actually using this, the outputs were, as is typical for us, a set of orthoimages, which the mosaic reconstruction artist can then work on, mark up and make estimates of materials from, all of that sort of stuff. Because it was very fine detail, these were produced at about half a millimetre per pixel, so these are very large images, but they show the mosaic in a great amount of detail. If you're interested in looking at this particular model, there's a link there on the screen, and you can see it on the Historic England Sketchfab site and have a play with it, zoom in and see the sort of levels of detail that we were able to get, bearing in mind that this is a model that was presented for [indistinct] consumption, so the imagery in the orthophoto product is of much higher resolution.
I'll move on now to another case study. This is at the other end of the scale for us. This is a complex, highly decorated structure, the Church of Saint Mary at Studley Royal in Ripon. This is a church that was designed by William Burges, the flamboyant William Burges, and it was commissioned by the Marquess and Marchioness of Ripon after a family tragedy in the 1870s. It's set in Studley Royal Park, which is a World Heritage Site that also includes the ruins of Fountains Abbey, and it's owned by English Heritage and managed by the National Trust, so a nice example of partnership between those organisations there.
We were asked to undertake a pilot geospatial survey of the church in order to derive some baseline 3D data and survey outputs for the future conservation, repair and maintenance work on the building. I should say that this goes hand in hand with some earlier work that was done back in the '90s and 2000s. So that's an illustration of the project brief. This project involved rather more data than the previous one. We started, as we alluded to in the previous lecture on terrestrial laser scanning, with the control network, where we have a series of survey-grade GNSS measurements transferred onto a local grid with a scale factor of 1. It took 189 laser scans, all up, to record this, and we had a series of 1,242 aerial images, which required the drone, and 3,200 terrestrial images. Now, when we count this all up, that's 2.7 terabytes of data, which is an awful lot to deal with.
I'll just reiterate some of the benefits of the different methods again. I think the TLS data on its own gives a great record of the structure as is visible from the ground. Unless we're raising the scanner up on an extendable tripod, which isn't possible with all scanners, we're really talking about looking at something from the ground. And in this image, you can see that the coverage on the roof here is less than it is down at the ground level, and that's why we can see the interior through that point cloud there. It's got a very tall spire, so we used a scanner which is reliable at range for that.
We're looking at, more or less, the same view, but with the model here sliced in half so you can see that each of these little cubes in here represents one of the scanner positions, internally as well as externally, so an awful lot of scans to record the complexity of the interior of this structure. While we were there, we also used another scanner. We were interested in trialling the thermal sensor, and the image in the centre shows that in what was, well, apparently to us, a freezing church while we were recording, the heating systems do appear to be working well. So this is the thermal data applied to the 360-degree scan data from that scanner, and it can be applied as a texture if you want thermal output as a texture.
So the photogrammetric imagery in this instance for reality capture serves two purposes. The first purpose is in filling the gaps in the scanner data. We can't get the scanner everywhere, and it's relatively easy to poke a camera into spaces that you can't get a 3D laser scanner. So the photogrammetry infills that, and it also provides the basis for the high-resolution orthoimages that we're going to produce. These are the products that the client has required. So they want orthophotographs that can be measured from.
You'll be able to see here that we have some images that are taken much higher up than others. So these are all taken from the ground in the lower part of the illustration there, and these up at the top are taken using an extendable mast with the tripod on the end of it. We clearly couldn't use a drone inside the church. You'll maybe see later why. It's absolutely full of decoration. But for the exterior, where we saw with the laser scanning that the coverage was less thorough, we combined the drone imagery with the scans, and in this illustration you can see the scans still here in this simultaneous registration, represented by the cubes, and the images represented here by the pyramids showing the image positions.
So when we combine all of this data together and we apply textures, we end up with a 3D model. So here's a cutaway of the combined data from the interior and exterior of the church from our recording. You'll be able to see immediately from this, and in the light of previous surveys, that we didn't survey inside the tower here or through the roof spaces, so this is just the highly decorated, visible interior of the church and the exterior. For data-handling purposes, we will often strip these apart and deal with the interior and the exterior separately, but in this instance, we've used both together.
Now, the model looks-- It's nice to see a 3D model, and here we are zoomed in to one part of it, showing the level of detail that we're capturing, but this is necessary. The model is necessary to produce an orthophotograph in the first place, because we need an orthographic projection of the thing, and we need all of the data in there in order to accurately represent the relationships between these things in the final survey product. So we've just been looking at the north side of the church there. This is a slice through the model looking at the south side, cut away here, and because we have a 3D model, we can cut it up in any direction we like, so here we are. This is a slice through the east end of the nave, just looking up into the chancel, for the generation of sections.
And the next slide, yes, we zoom in to part of the model representing the chancel, and this gives you some idea of the level of complexity of the structure inside, highly decorated [indistinct] surfaces, absolutely everywhere. And for this sort of surface, really, a photogrammetric method with a combination of the photogrammetry and the TLS is the only way that we're going to adequately be able to represent this sort of subject matter. The last slide from the interior of the model. This is the tomb of the Marquess and Marchioness of Ripon. These are the people who commissioned the model in the first place-- the church in the first place.
So I'll move on briefly now to products so that I can hand back to Paul. So this is the exterior roof plan, as an orthoimage. The reflected ceiling plan, so this is looking from the floor upwards at the interior ceiling of the church. We have long sections here through the building. These are the interiors only; obviously, it's not showing the relationship with the exterior in these.
And there are other points of interest in the church. There's a fantastic mosaic floor in the chancel. I realise that my talks have both featured mosaics now. So here's the mosaic floor in the chancel, and if we zoom in, you can see again the sort of level of detail that we're able to get there. Exterior elevations go all the way up the tower, including the weathervane. These were produced at the slightly reduced resolution of 2 millimetres per pixel. That's really, simply, about making the data handleable whilst adequately supplying the client with their requirements.
This is, I think, my last slide. It's just making the point that the traditional line drawing output is still exceedingly useful and can be derived from the orthoimages that we've produced using the combination of TLS and photogrammetry. But they're both useful for different things. The orthoimage gives you a great deal of information about colour, texture, stone type, all of these sorts of things that the line drawing can't, whereas the line drawing does allow for an awful lot of infilling and additional data to be attached to it.
So we'll move on to my poll question now before handing over to Paul. Would this method or the outputs from it be useful to you in your work? So we have a 'Yes', 'No', 'It's not relevant to me' or 'I'm already using it'. It's a reasonably resounding 'Yes', I think there. So on that note, I'll hand back over to Paul.
Paul [51:08] Thank you very much, Jon, for that, and thank you to David beforehand, as well. That was quite an enlightening poll that we had there. If I just move on. Right, I've been following the chat, and there are some fairly significant questions being asked, so hopefully we've got a few minutes at the end where we can go back to those. But I just thought I'd summarise, because I think Jon's succinctly shown, even in the two case studies which he talked about, how the combination of scanning and photogrammetry can provide an end product that will give a good indication of the condition of a surface, not just at ground level but at higher level, if you take adequate imagery. It also highlights the size of the data sets that you have.
But what we're finding is that the work that we're doing for our colleagues in English Heritage is helping them within their structural and condition-recording work, where traditionally they may have gone to a site armed with a pair of binoculars to try and visually spot some of the issues that may be in the structure. One of the sites we've worked at is Lincoln Bishops' Palace down in Lincolnshire, just south of the cathedral, where we've used the same approaches that Jon showed for Studley Royal to help generate 3D data for English Heritage to use within their own conservation work that's going on there. So what I'm trying to say is that going through the approach, generating a reality capture model, whether it's using hybrid modelling techniques or something else, can generate a base model from which a lot of other information can then be extracted, both on the detail of the structure and the more visual, presentational components.
So just following on from that, where traditionally we would've done stereophotogrammetry to capture elevations, creating a 3D reality capture model of the actual structure then enables us to slice and dice, I suppose, that model to generate the outputs that are required. So you've got, for instance, the four elevations of the Alnwick Tower at Lincoln Bishops' Palace, but then more detailed information on the actual surface. And I've been involved in surveying for nearly 40 years now, and I remember when line drawings were king. That was all that was being generated. But a line drawing is missing quite a lot of information in between the lines that are actually used, and this is where these modern techniques are proving so powerful, and they capture the surface indentations, so you can use it for works documentation. But like we did at Stonehenge, where we analysed the laser scan data, you can start to generate archaeological information as well.
Jon's highlighted the visual sort of presentation capabilities of reality capture models and photogrammetric and laser scan models, as well, and they're proving very sort of useful and popular for public viewing. And this is the one that Jon highlighted, that you can have a look at yourself, and there we've got 1,900 views of this. I must admit, I took this quite a while ago. So there is a lot of public interest in visually looking at this, and I'm hoping that some of the other sort of projects that the likes of Historic England are initiating, the Heritage Action Zones and even the High Street HAZ work that's going on, could maybe sort of utilise this technology to aid public dissemination and display.
I'm pleased Jon actually showed the original photogrammetric line drawing that, if my memory serves me right, David Andrews actually photogrammetrically plotted quite a few years ago, because I'm highlighting the ability to look at two different epochs of data. Admittedly, if you compare chalk with cheese, you might get a different result, but even though we've got a line drawing from the '80s, that was generated from photography, admittedly stereophotography, and if it's archived and accessible, with suitable metadata, it can be pulled out of the archive and re-used, re-processed, re-analysed. So, for instance, we could compare and contrast how that elevation has changed over time, particularly how its condition has changed, which will then provide useful direction for the historic building surveyors and other people to work out how to conserve and restore that structure.
And then you might remember, those of you that watched the previous webinar, I finished on the application of BIM from laser scan data. We've actually taken that on board, as you'll see there. We've got a date in the diary for our webinar on BIM for heritage. Reality capture uses photogrammetric and laser scan data fused together, but it's still generating the basic data sets that could be used within a BIM model or even a BIM process. And we're now hearing more about the digital twin, and I think this is maybe something we can explore at that future date, the webinar on 22nd June, because it's in association with the BIM for Heritage special interest group, so we should be able to have people there talking about the application of 3D data sets within BIM and digital twin work.
So that is all we wanted to present to you today. We're one minute over, but I think we've still got time to potentially go back to some of the questions raised, because I've seen quite a few questions come up in the chat. Are you able to bring any of those back out, Alice?
Alice [58:37] Hi, Paul. I won't be able to bring them out, but I can read them out. There were so many. There were a lot, and Gary and Lizzie have been busy answering, but if there was a particular subject, I could bring that out.
Paul [58:50] There was one, in particular, just before I spoke at the end here. There was a question about, how much time do you need to budget for the likes of a survey at Saint Mary's Church?
Alice [59:07] Yep, OK. If you bear with me, I'll get that up for you. It was Florence asked – Florence Alberta, I think – 'Looking at Church of Saint Mary, what sort of budget and timescale must be allowed for?' And David started answering the question saying, 'It was done in piecemeal fashion but would probably need a good two weeks on site and six weeks in the office'. If you'd like to illuminate us further.
Paul [59:31] Well, I will delegate that to Jon because Jon has done most of the processing. Are you able to comment on that, Jon?
Jon [59:43] Yep, sorry. Just remembering to unmute myself. David's in the right ballpark there. Depending on the scanner and the amount of photography, broadly speaking, the more complex the structure, the more imagery and the more scans are required, and the longer it takes to process. But I think a total of about eight weeks for that is broadly on par. David's about right there. Two weeks on site and six weeks for [safe?] processing. It's difficult to judge with this one because, due to Covid, we were there and then not there and there and not there throughout the whole recording process.
Alice [01:00:30] We did have a couple of questions about software, and two of them, I think, didn't quite get answered. John [Hellart?]-Jones asked, 'What is the preferred software for hybrid processing for the team?'
Jon [01:00:47] We're mostly using Capturing Reality at the moment or Bentley's ContextCapture, although the latest version of Agisoft's Metashape, which is very popular, also ingests laser scans directly, so the existing users may want to have a look at that.
Alice [01:01:13] Then that links to, because obviously I'm not the best person to ask the questions, 'Which software do you guys use for the simultaneous registration of TLS and photogrammetry?', which was from Daniel Hunt, a bit of a follow-on.
Jon [01:01:28] Yeah. Capturing Reality is what we're mostly using at the moment, although Bentley's ContextCapture, as I mentioned, also does the same thing. Now, there is other software, I hasten to add. I can only speak from the experience that we have, and those are the applications that we're using.
Paul [01:01:52] Yeah, if I can just put a bit of information in there. Some people may have heard of Pix4D, which basically came into the market principally aimed at drone imagery approaches. It can now be used for buildings and structures as well. Then we've got-- Some of you may have heard of DJI, who are the dominant drone manufacturer. They have a package called DJI Terra, which, I believe, can do processing similar to what we've been talking about today. Sadly, none of this software is free. I don't think there is any sort of-- Well, Jon might be able to outline some of the free photogrammetry packages, but I'm not sure if there are any free reality capture hybrid modelling packages.
Jon [01:02:51] No, there aren't, not as far as I'm aware, outside academia. I mean, the two packages that Paul mentioned there, DJI Terra and Pix4D, they're not for hybrid modelling. They're for photogrammetric processing. There is free photogrammetric software out there. I'm thinking 3DF Zephyr Free is a good one, and Meshroom as well. That's another good one, from AliceVision. You can find that on GitHub. And you can search for 3DF Zephyr; I'll just pop it into the chat now.
Alice [01:03:37] We've just had a question in the chat about delivering imagery. So [Steyn Govin?] says, 'How would you deliver the image to a client? Through a free 3D model or just 2D images in a PDF?'
Paul [01:03:53] I think I would answer that by saying we'd probably ask that question right at the very beginning, before you even go out to site and start surveying. Every project is different. Most clients have different requirements. Some of them might just want a PDF output. Some of them might want the full 3D. Some of them might still want a 2D drawing. So it's probably a politician's response, sort of getting around the question, but it's a case of promoting communication between the client and the provider of the data. In terms of delivering data sets, thankfully the manufacturers seem to be cottoning on to this question, and they're providing solutions that make the enormous file sizes that you've probably seen Jon refer to viewable remotely or cloud-based – that sort of approach, rather than having to find a huge hard drive to store the data sets on.
However, the manufacturers will only take notice if we make a point of this, so at events like GEO Business, I'd ask that question of all the manufacturers that will be there, to promote easy transfer of and accessibility to the data sets. Do David and Jon want to add anything to that?
David [01:05:37] No. I was just going to point out there was a question earlier from Rob [Harrop?] about whether we had any guidance or whatever on how to do this, and I think I'm right in saying that we haven't actually at the moment, but it's possibly something we should be looking at.
Jon [01:05:53] Potentially yeah. I mean, it's a very fast-moving market, and I fear that by the time we've produced anything, it may already be out of date by the time it gets published. Most of these pieces of software have extensive help available online, and there's an awful lot of YouTube content for them, and I'd advise having a look at those for the sort of aspects of workflows.
Paul [01:06:21] OK, thanks very much. Can we have one more question?
Alice [01:06:26] One last question, Paul. We'll wrap up with this last one from Jonathan Goodwin: 'How do you go about archiving the survey data?' So it follows on from software.
Paul [01:06:38] Well, once again, the archiving of the data should be considered right at the very beginning of the project, rather than leaving it as a bit of an afterthought once you've collected these thousands of drone images and terabytes of laser scan data. For the work that we do, all of the imagery and all of the outputs are deposited with the Historic England Archive in Swindon. Admittedly, it was principally an analogue-based archive, sort of archiving paper-based and film-based outputs.
It's now archiving digital products as well, and if you want any further advice on that, I'd certainly approach the Historic England Archive down in Swindon. It's headed up by Paul Backhouse, who used to be the line manager for our team, so he's got an understanding of, and a sympathy with, I suppose, the requirements and the large data sets. But that is a very, very important question, Jonathan, and I'm really pleased that you asked it, because these days we're investing quite a lot of time and money in generating these data sets, so it's important that they're not just used once – captured once, used many times. And if the data's archived properly, then given the rate of knots at which the processing software is developing, there will be means of reprocessing it in future.
So my advice is raise it as a question right at the very beginning of your project and make sure that there is some thought in terms of where you're going to archive it, who's going to hold it, how accessible it is, what metadata is applied to it to maximise the benefits from it. I think we'd better leave it at that, Alice.
Alice [01:08:51] Yep, I think on that note, we will end today's session. I just want to say thank you again, Paul, David and Jon, for another great session, and I'm looking forward to the fourth instalment that you mentioned earlier. So we'll be looking forward to that, and I'd like to thank all our attendees today. There was great chat in the chatroom again, and thank you very much. Must mention Gary and Lizzie for answering questions.