Space Syntax

dates back to the 1970s and is used for analysing connectivity, openness, visibility and flow of spatial configurations. Popular applications are e.g. floor plans of museums, city maps, etc.

Such 2D diagrams can be generated by computing the “visibility” of each point on a regular grid (image pixels). Depending on the exact purpose, visibility can mean, for example, the total area directly visible from that point in all directions (360°).
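
As a rough illustration only (not taken from any particular solver), a minimal Python sketch of this idea could look as follows, assuming a hypothetical occupancy grid solid[y][x] that marks wall pixels; the visible area is approximated by marching rays over 360° and summing circular sectors:

import math

def isovist_area_2d(solid, x, y, num_rays=64, max_dist=512.0, step=0.5):
    # approximate the area directly visible from grid cell (x, y) over 360 degrees
    h, w = len(solid), len(solid[0])
    dtheta = 2.0 * math.pi / num_rays
    area = 0.0
    for k in range(num_rays):
        dx, dy = math.cos(k * dtheta), math.sin(k * dtheta)
        r = 0.0
        while r < max_dist:
            px, py = int(x + dx * r), int(y + dy * r)
            if px < 0 or py < 0 or px >= w or py >= h or solid[py][px]:
                break
            r += step
        area += 0.5 * r * r * dtheta  # circular sector swept by this ray
    return area

Evaluating this for every free grid cell gives exactly the kind of 2D diagram described above.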

2D diagrams make sense in multiple ways. They are easy and very quick for us to read. For most spatial configurations this is a reasonable way to visualize Space Syntax, since it reflects our perception of space. Although we live in a 3-dimensional world, our movements are largely limited to 2 dimensions: we move in the XY-plane and rotate around our Z-axis most of the day. This makes the horizontal surroundings the most dominant part of our perception of space, and therefore 2D diagrams are viable simplifications. Adding a 3rd dimension to the analysis won’t add much further information to the diagram, at least for most flat floor plans.

 

 

However, even simple openings in walls, stairs, any kind of obstacle, lofts, etc. can be difficult or even impossible for 2D solvers to handle.

So there are configurations where adding a 3rd dimension adds important information and results in a completely different diagram, or may even be the only way to do a meaningful analysis at all: e.g. theaters (audience as well as stage setup), stadiums, halls, atriums, city blocks and other spatial configurations where vertical space and its connectivity/visibility are important.

 

 



In the video, Global Illumination and the Spatial Analysis (Space Syntax and Nearest Neighbors) are computed on the fly in the first few milliseconds.

 

Algorithm for 3D spatial analysis

Spatial visibility analysis is in fact the same thing as ambient occlusion, a common tool in computer graphics. Computing ambient occlusion on a global level means including the whole scene in the process. A simple, but potentially very costly way (depending on the resolution of the 3D grid) is to render depth maps in all 6 directions at each point on the grid. Another method is to use raymarching, testing different directions; the more directions, the better the result. Similar to the latter, but far more efficient, is to use the hardware rasterizer and compute the visibility for all points in parallel via global lines.
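
For illustration, here is a minimal Python sketch of the ray-sampling variant only (not of the rasterizer/global-lines technique actually used by the renderer); intersect(origin, direction) is a hypothetical scene query returning the distance to the nearest hit, or None for a miss:

import math, random

def sample_sphere():
    # uniform random direction on the unit sphere
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def visibility_at(point, intersect, num_rays=256, miss_value=0.0):
    # averaged distance to the nearest intersection over many random directions;
    # rays that hit nothing (open sky) contribute miss_value (zero by default)
    total = 0.0
    for _ in range(num_rays):
        t = intersect(point, sample_sphere())
        total += t if t is not None else miss_value
    return total / num_rays

The renderer computes essentially this quantity for every voxel and surface pixel at once, but via the rasterizer instead of individual rays.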

It was an easy task to add this kind of spatial analysis to the radiosity renderer I created a while back, because I could use the exact same technique. There is almost zero overhead, which makes it possible to compute the Global Illumination and the Spatial Analysis at the same time.

 

The new module contains 3 features:

  1. Space Syntax Voxels
    • Visibility computed for each point on a regular 3-dimensional grid, integrating the surrounding 3D scene over the unit sphere.
  2. Space Syntax Pixels
    • Visibility computed for each pixel in the Texture Atlas, in the UV-domain of the triangle, integrating the surrounding 3D scene over the hemisphere for both sides of the triangle (front and back), so that results are also obtained for geometry that has no thickness.
  3. Nearest Neighbor Voxels
    • NN computed for each point on a regular 3-dimensional grid. There are probably more efficient methods than my stochastic approach, and definitely more accurate ones, but it comes for free, without any processing overhead (see the sketch below).
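
A hedged sketch of how such a stochastic lookup could work (this is an assumption for illustration, not necessarily the method used here): reuse the random rays and keep, per voxel, the id of the closest hit. intersect_id(origin, direction) is a hypothetical query returning (distance, object_id) or None:

def nearest_neighbor_at(point, intersect_id, num_rays=256):
    # stochastic estimate: the closest surface seen by any of the random rays
    # (sample_sphere() as in the sketch above)
    best_dist, best_id = float("inf"), None
    for _ in range(num_rays):
        hit = intersect_id(point, sample_sphere())
        if hit is not None and hit[0] < best_dist:
            best_dist, best_id = hit
    return best_id, best_dist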

 

The term “Visibility” is computed as:

  • the averaged
  • surface-normal weighted (kd … diffuse reflection coefficient)
  • distance

to the nearest ray-intersection.

The kd-factor is of course only applied to surface sample points (pixels). The voxels’ “Visibility” is the plain averaged distance.
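
Written out as a small sketch (parameter names are assumptions, and treating the normal weighting as a per-ray cosine factor is a guess on my part):

def voxel_visibility(distances):
    # plain averaged distance to the nearest intersection
    return sum(distances) / len(distances)

def pixel_visibility(normal, samples, kd):
    # samples: (direction, distance) pairs over the front and back hemisphere;
    # each distance is weighted by the cosine to the surface normal and by kd
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    weighted = [abs(dot(normal, d)) * dist for d, dist in samples]
    return kd * sum(weighted) / len(samples)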

The “Visibility” term correlates directly to the volume of the corresponding 3-dimensional isovist:

Volume (Voxel-Isovist) = 4/3 * PI * VisibilityFactor³
Volume (Pixel-Isovist) = 4/3 * PI * (VisibilityFactor/2)³

Variations to Simulate 2.5D

To account for the fact that the 3D analysis is not always well suited to the human “2D perception” in certain configurations (as noted above), the 3D solver can simply be modified to sample completely horizontally, or close to that. This gives very interesting results indeed. Typical 2D solvers could only reproduce this (somewhat inefficiently) by creating horizontal slices of the building and computing a diagram for each of them.
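
In terms of the ray-sampling sketch above, the only change needed is the direction generator; a hypothetical variant that keeps all samples within a small elevation band around the XY-plane:

import math, random

def sample_near_horizontal(max_elevation_deg=5.0):
    # random direction within +/- max_elevation_deg of the horizontal plane
    phi = random.uniform(0.0, 2.0 * math.pi)
    max_z = math.sin(math.radians(max_elevation_deg))
    z = random.uniform(-max_z, max_z)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

With max_elevation_deg = 0 this degenerates to a purely horizontal, 2D-like analysis, evaluated at every height of the 3D grid.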

Of course any other kind of sampling strategy can be applied too.

 

 

Results

Image [1] shows the default GI output of the renderer, using an HDR image for lighting.

In [2] the visibility at the object’s surface is visualized. It looks very similar to ambient occlusion, but this one is a bit different: the brightness indicates the space (or volume) directly visible from that point, i.e. a 3-dimensional isovist. Empty space, like the sky, is not considered.

The pseudo-color image [3] simply reveals the most exposed surfaces a lot better.

While [2] and [3] are computed for the object’s surface and stored in light maps, [4] is computed for each voxel (a regular 3D grid) in a separate pass.

Scene:

  • 2 × 2,923,193 pixels, storing GI (irradiance), Visibility … and other stuff
  • 1,270,016 voxels, storing Visibility and Nearest Neighbor

All of these are updated each pass.

Closed Environment

The demos shown in this post are all “open” scenes, where a lot of triangles face the sky, which adds just zero to the visibility function. Instead of zero, a constant factor could be used, and the result would be some very nice Ambient Occlusion. In the end it depends on the purpose of the specific visualization whether the sky should be included as a constant factor or not.
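
In terms of the earlier ray-sampling sketch, this choice is just the (assumed) miss_value parameter:

vis_space_syntax = visibility_at(p, intersect, miss_value=0.0)    # sky ignored
vis_ambient_occ  = visibility_at(p, intersect, miss_value=100.0)  # sky as a constant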

However, in closed environments (indoor) all sample directions produce geometry intersections at some finite distance.

Visibility – Pixels:

Visibility – Voxels:

Some Case Studies

It was quite interesting to test different models (3dwarehouse.sketchup.com, archive3d.net, artist-3d.com, tf3dm.com, k3d-Surf and my own) and compare results.

Large city plans are challenging for the 3D voxels because a rather high resolution is required. Also, the larger the extent of the model, the less important the 3D visualisation becomes, and a 2D solver would probably produce better diagrams. But when zoomed in it gets interesting: exposed facades are eye-catching in the pseudo-color images, and when compared to real life there is probably a good chance of finding large advertisement banners in these spots. The same goes for sports stadiums.