Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders

In this tutorial we’ll implement an infinite circular gallery using WebGL with OGL based on the website Lions Good News 2020 made by SHIFTBRAIN inc.

Most of the steps of this tutorial can also be reproduced in other WebGL libraries such as Three.js or Babylon.js with the correct adaptations.

With that being said, let’s start coding!

Creating our OGL 3D environment

The first step of any WebGL tutorial is making sure that you’re setting up all the rendering logic required to create a 3D environment.

Usually what’s required is: a camera, a scene and a renderer that is going to output everything into a canvas element. Then inside a requestAnimationFrame loop, you’ll use your camera to render a scene inside the renderer. So here’s our initial snippet:

```js
import { Renderer, Camera, Transform } from 'ogl'

export default class App {
  constructor () {
    this.createRenderer()
    this.createCamera()
    this.createScene()

    this.onResize()

    this.update()

    this.addEventListeners()
  }

  createRenderer () {
    this.renderer = new Renderer()

    this.gl = this.renderer.gl
    this.gl.clearColor(0.79215686274, 0.79215686274, 0.74117647058, 1)

    document.body.appendChild(this.gl.canvas)
  }

  createCamera () { = new Camera(this.gl) = 45 = 20
  }

  createScene () {
    this.scene = new Transform()
  }

  /**
   * Events.
   */
  onTouchDown (event) {

  }

  onTouchMove (event) {

  }

  onTouchUp (event) {

  }

  onWheel (event) {

  }

  /**
   * Resize.
   */
  onResize () {
    this.screen = {
      height: window.innerHeight,
      width: window.innerWidth
    }

    this.renderer.setSize(this.screen.width, this.screen.height){
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })

    const fov = * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) *
    const width = height *

    this.viewport = {
      height,
      width
    }
  }

  /**
   * Update.
   */
  update () {
    this.renderer.render({
      scene: this.scene,
      camera:
    })

    window.requestAnimationFrame(this.update.bind(this))
  }

  /**
   * Listeners.
   */
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))

    window.addEventListener('mousewheel', this.onWheel.bind(this))
    window.addEventListener('wheel', this.onWheel.bind(this))

    window.addEventListener('mousedown', this.onTouchDown.bind(this))
    window.addEventListener('mousemove', this.onTouchMove.bind(this))
    window.addEventListener('mouseup', this.onTouchUp.bind(this))

    window.addEventListener('touchstart', this.onTouchDown.bind(this))
    window.addEventListener('touchmove', this.onTouchMove.bind(this))
    window.addEventListener('touchend', this.onTouchUp.bind(this))
  }
}

new App()
```

Explaining the App class setup

In our createRenderer method, we’re initializing a renderer with a fixed color background by calling this.gl.clearColor. Then we’re storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> (this.gl.canvas) element to our document.body.

In our createCamera method, we’re creating a new Camera() instance and setting some of its attributes: fov and its z position. The FOV is the field of view of your camera, what you’re able to see from it. And the z is the position of your camera in the z axis.

In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.

The onResize method is the most important part of our initial setup. It’s responsible for three different things:

  1. Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
  2. Updating our camera perspective with the aspect ratio of the viewport (width divided by height).
  3. Storing in the variable this.viewport the dimensions that will help us transform pixel sizes into 3D environment sizes, by using the fov from the camera.

The approach of using the camera.fov to transform pixels in 3D environment sizes is an approach used very often in multiple WebGL implementations. Basically what it does is making sure that if we do something like: this.mesh.scale.x = this.viewport.width; it’s going to make our mesh fit the entire screen width, behaving like width: 100%, but in 3D space.
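To make the conversion concrete, here is the same math from onResize extracted into a standalone function (the function name and shape are mine, for illustration):

```javascript
// Converts the camera's field of view (in degrees) and its z distance into the
// width/height of the visible area at z = 0 — the same math onResize performs.
function viewportSize (fovInDegrees, z, aspect) {
  const fov = fovInDegrees * (Math.PI / 180)
  const height = 2 * Math.tan(fov / 2) * z

  return { height, width: height * aspect }
}
```

With fov = 45 and the camera at z = 20, a square canvas gives a viewport of roughly 16.57 × 16.57 units, so a mesh scaled to this.viewport.width spans the whole screen.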

And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.

You’ll also notice that we already included the wheel, touchstart, touchmove, touchend, mousedown, mousemove and mouseup events; they will be used to add user interactions to our application.

Creating a reusable geometry instance

It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.

```js
import { Renderer, Camera, Transform, Plane } from 'ogl'

createGeometry () {
  this.planeGeometry = new Plane(this.gl, {
    heightSegments: 50,
    widthSegments: 100
  })
}
```

The reason for including heightSegments and widthSegments with these values is to be able to manipulate the vertices in a way that makes the Plane behave like a sheet of paper in the air.

Importing our images using Webpack

Now it’s time to import our images into our application. Since we’re using Webpack in this tutorial, all we need to do to request our images is use import:

import Image1 from 'images/1.jpg'
import Image2 from 'images/2.jpg'
import Image3 from 'images/3.jpg'
import Image4 from 'images/4.jpg'
import Image5 from 'images/5.jpg'
import Image6 from 'images/6.jpg'
import Image7 from 'images/7.jpg'
import Image8 from 'images/8.jpg'
import Image9 from 'images/9.jpg'
import Image10 from 'images/10.jpg'
import Image11 from 'images/11.jpg'
import Image12 from 'images/12.jpg'

Now let’s create the array of images that we want to use in our infinite slider. We’re basically going to reference the variables above inside a createMedias method and use .map to create new instances of the Media class (new Media()), which is going to be our representation of each image of the gallery.

```js
createMedias () {
  this.mediasImages = [
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
  ]

  this.medias ={ image, text }, index) => {
    const media = new Media({
      geometry: this.planeGeometry,
      gl: this.gl,
      image,
      index,
      length: this.mediasImages.length,
      renderer: this.renderer,
      scene: this.scene,
      screen: this.screen,
      text,
      viewport: this.viewport
    })

    return media
  })
}
```

As you’ve probably noticed, we’re passing a bunch of arguments to our Media class, I’ll explain why they’re needed when we start setting up the class in the next section. We’re also duplicating the amount of images to avoid any issues of not having enough images when making our gallery infinite on very wide screens.

It’s important to also include some specific calls in the onResize and update methods for our this.medias array, because we want the images to be responsive:

```js
onResize () {
  if (this.medias) {
    this.medias.forEach(media => media.onResize({
      screen: this.screen,
      viewport: this.viewport
    }))
  }
}
```

And also do some real-time manipulations inside the requestAnimationFrame:

```js
update () {
  this.medias.forEach(media => media.update(this.scroll, this.direction))
}
```

Setting up the Media class

Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.

In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:

```js
export default class {
  constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
    this.geometry = geometry
    this.gl = gl
    this.image = image
    this.index = index
    this.length = length
    this.renderer = renderer
    this.scene = scene
    this.screen = screen
    this.text = text
    this.viewport = viewport

    this.createShader()
    this.createMesh()

    this.onResize()
  }
}
```

Explaining a few of these arguments: the geometry is the geometry we’re going to apply to our Mesh class. The this.gl is our GL context, useful for doing WebGL manipulations inside the class. The this.image is the URL of the image. Both this.index and this.length will be used for the position calculations of the mesh. The this.scene is the group to which we’re going to append our mesh. And finally, this.screen and this.viewport are the sizes of the viewport and environment.

Now it’s time to create the shader that is going to be applied to our Mesh in the createShader method; in OGL, shaders are created with Program:

```js
createShader () {
  const texture = new Texture(this.gl, {
    generateMipmaps: false
  })

  this.program = new Program(this.gl, {
    fragment,
    vertex,
    uniforms: {
      tMap: { value: texture },
      uPlaneSizes: { value: [0, 0] },
      uImageSizes: { value: [0, 0] },
      uViewportSizes: { value: [this.viewport.width, this.viewport.height] }
    },
    transparent: true
  })

  const image = new Image()

  image.src = this.image
  image.onload = _ => {
    texture.image = image

    this.program.uniforms.uImageSizes.value = [image.naturalWidth, image.naturalHeight]
  }
}
```

In the snippet above, we’re basically creating a new Texture() instance, making sure to use generateMipmaps as false so it preserves the quality of the image. Then creating a new Program() instance, which represents a shader composed of fragment and vertex with some uniforms used to manipulate it.

We’re also creating a new Image() instance to preload the image before applying it to the texture.image. And also updating the this.program.uniforms.uImageSizes.value because it’s going to be used to preserve the aspect ratio of our images.

It’s important to create our fragment and vertex shaders now, so we’re going to create two new files: fragment.glsl and vertex.glsl:

```glsl
precision highp float;

uniform vec2 uImageSizes;
uniform vec2 uPlaneSizes;
uniform sampler2D tMap;

varying vec2 vUv;

void main() {
  vec2 ratio = vec2(
    min((uPlaneSizes.x / uPlaneSizes.y) / (uImageSizes.x / uImageSizes.y), 1.0),
    min((uPlaneSizes.y / uPlaneSizes.x) / (uImageSizes.y / uImageSizes.x), 1.0)
  );

  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );

  gl_FragColor.rgb = texture2D(tMap, uv).rgb;
  gl_FragColor.a = 1.0;
}
```
```glsl
precision highp float;

attribute vec3 position;
attribute vec2 uv;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

varying vec2 vUv;

void main() {
  vUv = uv;

  vec3 p = position;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}
```

And require them in the start of Media.js using Webpack:

import fragment from './fragment.glsl'
import vertex from './vertex.glsl'

Now let’s create our new Mesh() instance in the createMesh method merging together the geometry and shader.

```js
createMesh () {
  this.plane = new Mesh(this.gl, {
    geometry: this.geometry,
    program: this.program
  })

  this.plane.setParent(this.scene)
}
```

The Mesh instance is stored in the this.plane variable to be reused in the onResize and update methods, then appended as a child of the this.scene group.

The only thing we have now on the screen is a simple square with our image:

Let’s now implement the onResize method and make sure we’re rendering rectangles:

```js
onResize ({ screen, viewport } = {}) {
  if (screen) {
    this.screen = screen
  }

  if (viewport) {
    this.viewport = viewport

    this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
  }

  this.scale = this.screen.height / 1500

  this.plane.scale.y = this.viewport.height * (900 * this.scale) / this.screen.height
  this.plane.scale.x = this.viewport.width * (700 * this.scale) / this.screen.width

  this.plane.program.uniforms.uPlaneSizes.value = [this.plane.scale.x, this.plane.scale.y]
}
```

The scale.y and scale.x assignments are responsible for sizing our element properly, transforming our previous square into a 700×900 rectangle, based on the scale.

And the uViewportSizes and uPlaneSizes uniform value updates makes the image display correctly. That’s basically what makes the image have the background-size: cover; behavior, but in WebGL environment.
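To see why this behaves like background-size: cover, here is the ratio math from fragment.glsl mirrored in plain JavaScript (the helper name is mine, for illustration):

```javascript
// Mirrors the `ratio` computation in fragment.glsl: for each axis it returns
// the fraction of the texture that stays visible once the image covers the plane.
function coverRatio ([planeWidth, planeHeight], [imageWidth, imageHeight]) {
  return [
    Math.min((planeWidth / planeHeight) / (imageWidth / imageHeight), 1),
    Math.min((planeHeight / planeWidth) / (imageHeight / imageWidth), 1)
  ]
}
```

A 2:1 plane showing a square image gives [1, 0.5]: the full width of the image is used, but only the middle half of its height, exactly like CSS cover cropping.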

Now we need to position all the rectangles in the x axis, making sure we have a small gap between them. To achieve that, we’re going to use this.plane.scale.x, this.padding and this.index variables to do the calculation required to move them:

```js
this.padding = 2

this.width = this.plane.scale.x + this.padding
this.widthTotal = this.width * this.length

this.x = this.width * this.index
```

And in the update method, we’re going to set the this.plane.position to these variables:

```js
update () {
  this.plane.position.x = this.x
}
```

Now you’ve set up all the initial code of Media, which results in the following image:

Including infinite scrolling logic

Now it’s time to make it interesting and include scrolling logic on it, so we have at least an infinite gallery in place when the user scrolls through your page. In our index.js, we’ll do the following updates.

First, let’s include a new object called this.scroll in our constructor with all variables that we will manipulate to do the smooth scrolling:

```js
this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
  last: 0
}
```

Now let’s add the touch and wheel events, so when users interact with the canvas, they’ll be able to move the gallery:

```js
onTouchDown (event) {
  this.isDown = true

  this.scroll.position = this.scroll.current
  this.start = event.touches ? event.touches[0].clientX : event.clientX
}

onTouchMove (event) {
  if (!this.isDown) return

  const x = event.touches ? event.touches[0].clientX : event.clientX
  const distance = (this.start - x) * 0.01 = this.scroll.position + distance
}

onTouchUp (event) {
  this.isDown = false
}
```

Then, we’ll include the NormalizeWheel library in the onWheel event, so we get consistent values across all browsers when the user scrolls:

```js
import NormalizeWheel from 'normalize-wheel'

onWheel (event) {
  const normalized = NormalizeWheel(event)
  const speed = normalized.pixelY += speed * 0.005
}
```

In our update method with requestAnimationFrame, we’ll lerp this.scroll.current towards to make it smooth, then pass it to all medias:

```js
update () {
  this.scroll.current = lerp(this.scroll.current,, this.scroll.ease)

  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll))
  }

  this.scroll.last = this.scroll.current

  window.requestAnimationFrame(this.update.bind(this))
}
```
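The lerp helper used in this update loop isn’t shown in the tutorial; a minimal version (assuming it lives in a utils/math module, which is my guess) would be:

```javascript
// Linear interpolation: moves p1 towards p2 by the fraction t per call,
// which is what gives the scroll its smooth easing.
function lerp (p1, p2, t) {
  return p1 + (p2 - p1) * t
}
```

With ease: 0.05, every frame the current scroll moves 5% of the remaining distance towards the target, producing the classic exponential ease-out feel.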

And now we just update our Media file to use the current scroll value to move the Mesh to the new scroll position:

```js
update (scroll) {
  this.plane.position.x = this.x - scroll.current * 0.1
}
```

This is the current result we have:

As you’ve noticed, it’s not infinite yet. To achieve that, we need to include some extra code. The first step is including the direction of the scroll in the update method from index.js:

```js
update () {
  this.scroll.current = lerp(this.scroll.current,, this.scroll.ease)

  if (this.scroll.current > this.scroll.last) {
    this.direction = 'right'
  } else {
    this.direction = 'left'
  }

  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll, this.direction))
  }

  this.scroll.last = this.scroll.current
}
```

Now in the Media class, you need to include a variable called this.extra in the constructor and offset it by the total width of the gallery whenever the element moves outside of the screen.

```js
constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.extra = 0
}

update (scroll, direction) {
  this.plane.position.x = this.x - scroll.current * 0.1 - this.extra

  const planeOffset = this.plane.scale.x / 2
  const viewportOffset = this.viewport.width

  this.isBefore = this.plane.position.x + planeOffset < -viewportOffset
  this.isAfter = this.plane.position.x - planeOffset > viewportOffset

  if (direction === 'right' && this.isBefore) {
    this.extra -= this.widthTotal

    this.isBefore = false
    this.isAfter = false
  }

  if (direction === 'left' && this.isAfter) {
    this.extra += this.widthTotal

    this.isBefore = false
    this.isAfter = false
  }
}
```

That’s it, now we have the infinite scrolling gallery, pretty cool right?

Including circular rotation

Now it’s time to include the special flavor of the tutorial: making the infinite scrolling also have a circular rotation. To achieve it, we’ll use Math.cos to change this.mesh.position.y according to the element’s horizontal position, and a map helper to change this.mesh.rotation.z based on the element’s position in the x axis.

First, let’s make it rotate smoothly based on the position. The map helper re-maps a number from one range onto another: for example, map(0.5, 0, 1, -500, 500) returns 0 because 0.5 is the midpoint of [0, 1] and 0 is the midpoint of [-500, 500]. In other words, where num sits between min1 and max1 controls where the output lands between min2 and max2:

```js
export function map (num, min1, max1, min2, max2, round = false) {
  const num1 = (num - min1) / (max1 - min1)
  const num2 = (num1 * (max2 - min2)) + min2

  if (round) return Math.round(num2)

  return num2
}
```
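A few sample values make the behavior clear (map is redeclared here without the round flag so the snippet is self-contained):

```javascript
// Re-maps num from the range [min1, max1] to the range [min2, max2].
function map (num, min1, max1, min2, max2) {
  return ((num - min1) / (max1 - min1)) * (max2 - min2) + min2
}

map(0, 0, 1, -500, 500)   // -500 — the start of the input range maps to min2
map(0.5, 0, 1, -500, 500) // 0 — the midpoint maps to the midpoint
map(1, 0, 1, -500, 500)   // 500 — the end maps to max2
```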

Let’s see it in action by including the following line of code in the Media class:

```js
this.plane.rotation.z = map(this.plane.position.x, -this.widthTotal, this.widthTotal, Math.PI, -Math.PI)
```

And that’s the result we get so far. It’s already pretty cool because you’re able to see the rotation changing based on the plane position:

Now it’s time to make it look circular. Let’s use Math.cos: we just divide this.plane.position.x by this.widthTotal, which gives us a normalized value for the cosine that we can then tweak by multiplying it by how much we want to change the y position of the element:

```js
this.plane.position.y = Math.cos((this.plane.position.x / this.widthTotal) * Math.PI) * 75 - 75
```

Simple as that: we’re just offsetting it by 75 units in environment space based on the position, which gives us the following result, exactly what we wanted to achieve:
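You can sanity-check the arc with a couple of values (the helper name is mine; widthTotal is normalized to 1 for simplicity):

```javascript
// Same formula as above: x at the center of the gallery gives the top of the
// arc (y = 0); x a full gallery-width away gives the bottom (y = -150).
function circleY (x, widthTotal) {
  return Math.cos((x / widthTotal) * Math.PI) * 75 - 75
}

circleY(0, 1) // 0
circleY(1, 1) // -150
```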

Snapping to the closest item

Now let’s include a simple snapping to the closest item when the user stops scrolling. To achieve that, we need to create a new method called onCheck that will do some calculations when the user releases the scrolling:

```js
onCheck () {
  const { width } = this.medias[0]
  const itemIndex = Math.round(Math.abs( / width)
  const item = width * itemIndex

  if ( < 0) { = -item
  } else { = item
  }
}
```

The result of the item variable is always the center of one of the elements in the gallery, which snaps the user to the corresponding position.

For wheel events, we need a debounced version of it called onCheckDebounce, which we can set up in the constructor using lodash/debounce:

```js
import debounce from 'lodash/debounce'

constructor () {
  this.onCheckDebounce = debounce(this.onCheck, 200)
}

onWheel (event) {
  this.onCheckDebounce()
}
```

Now the gallery is always being snapped to the correct entry:

Writing paper shaders

Finally let’s include the most interesting part of our project, which is enhancing the shaders a little bit by taking into account the scroll velocity and distorting the vertices of our meshes.

The first step is to include two new uniforms in our this.program declaration from Media class: uSpeed and uTime.

```js
this.program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] },
    uViewportSizes: { value: [this.viewport.width, this.viewport.height] },
    uSpeed: { value: 0 },
    uTime: { value: 0 }
  },
  transparent: true
})
```

Now let’s write some shader code to make our images bend and distort in a very cool way. In your vertex.glsl file, you should include the new uniforms: uniform float uTime and uniform float uSpeed:

```glsl
uniform float uTime;
uniform float uSpeed;
```

Then inside the void main() of your shader, you can manipulate the vertices in the z axis using these two values plus the position stored in the variable p. We’re going to use sin and cos to bend our vertices as if the plane were a sheet of paper, so all you need to do is include the following line:

```glsl
p.z = (sin(p.x * 4.0 + uTime) * 1.5 + cos(p.y * 2.0 + uTime) * 1.5);
```

Also don’t forget to increment uTime in the update() method from Media:

```js
this.program.uniforms.uTime.value += 0.04
```

Just this line of code outputs a pretty cool paper effect animation:

Including text in WebGL using MSDF fonts

Now let’s include our text inside the WebGL. To achieve that, we’re going to use msdf-bmfont to generate our files. You can see how to do that in this GitHub repository, but basically it’s a matter of installing the npm dependency and running the command below:

```shell
msdf-bmfont -f json -m 1024,1024 -d 4 --pot --smart-size freight.otf
```

After running it, you should now have a .png and a .json file in the same directory; these are the files we’re going to use in our MSDF implementation in OGL.

Now let’s create a new file called Title and start setting up its code. First let’s create our class and import the shaders and the font files:

```js
import AutoBind from 'auto-bind'
import { Color, Geometry, Mesh, Program, Text, Texture } from 'ogl'

import fragment from 'shaders/text-fragment.glsl'
import vertex from 'shaders/text-vertex.glsl'

import font from 'fonts/freight.json'
import src from 'fonts/freight.png'

export default class {
  constructor ({ gl, plane, renderer, text }) {
    AutoBind(this)

    this.gl = gl
    this.plane = plane
    this.renderer = renderer
    this.text = text

    this.createShader()
    this.createMesh()
  }
}
```

Now it’s time to start setting up the MSDF implementation code inside the createShader() method. The first thing we’re going to do is create a new Texture() instance and load into it the fonts/freight.png file stored in src:

```js
createShader () {
  const texture = new Texture(this.gl, { generateMipmaps: false })
  const textureImage = new Image()

  textureImage.src = src
  textureImage.onload = _ => texture.image = textureImage
}
```

Then we need to set up the fragment shader we’re going to use to render the MSDF text. Because MSDF can be optimized in WebGL 2.0, we’re going to use this.renderer.isWebgl2 from OGL to check whether it’s supported and declare different shaders accordingly, so we’ll have vertex300, fragment300, vertex100 and fragment100:

```js
createShader () {
  const vertex100 = `${vertex}`

  const fragment100 = `
    #extension GL_OES_standard_derivatives : enable

    precision highp float;

    ${fragment}
  `

  const vertex300 = `#version 300 es

    #define attribute in
    #define varying out

    ${vertex}
  `

  const fragment300 = `#version 300 es

    precision highp float;

    #define varying in
    #define texture2D texture
    #define gl_FragColor FragColor

    out vec4 FragColor;

    ${fragment}
  `

  let fragmentShader = fragment100
  let vertexShader = vertex100

  if (this.renderer.isWebgl2) {
    fragmentShader = fragment300
    vertexShader = vertex300
  }

  this.program = new Program(this.gl, {
    cullFace: null,
    depthTest: false,
    depthWrite: false,
    transparent: true,
    fragment: fragmentShader,
    vertex: vertexShader,
    uniforms: {
      uColor: { value: new Color('#545050') },
      tMap: { value: texture }
    }
  })
}
```

As you’ve probably noticed, we’re prepending fragment and vertex with different setups based on the renderer’s WebGL version. Let’s also create our text-fragment.glsl and text-vertex.glsl files:

```glsl
uniform vec3 uColor;
uniform sampler2D tMap;

varying vec2 vUv;

void main() {
  vec3 color = texture2D(tMap, vUv).rgb;

  float signed = max(min(color.r, color.g), min(max(color.r, color.g), color.b)) - 0.5;
  float d = fwidth(signed);
  float alpha = smoothstep(-d, d, signed);

  if (alpha < 0.02) discard;

  gl_FragColor = vec4(uColor, alpha);
}
```
```glsl
attribute vec2 uv;
attribute vec3 position;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

varying vec2 vUv;

void main() {
  vUv = uv;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```

Finally, let’s create the geometry of our MSDF font implementation in the createMesh() method. For that we’ll use the new Text() instance from OGL and then apply the buffers generated from it to a new Geometry() instance:

```js
createMesh () {
  const text = new Text({
    align: 'center',
    font,
    letterSpacing: -0.05,
    size: 0.08,
    text: this.text,
    wordSpacing: 0,
  })

  const geometry = new Geometry(this.gl, {
    position: { size: 3, data: text.buffers.position },
    uv: { size: 2, data: text.buffers.uv },
    id: { size: 1, data: },
    index: { data: text.buffers.index }
  })

  geometry.computeBoundingBox()

  this.mesh = new Mesh(this.gl, { geometry, program: this.program })
  this.mesh.position.y = -this.plane.scale.y * 0.5 - 0.085
  this.mesh.setParent(this.plane)
}
```

Now let’s apply our brand new titles in the Media class. We’re going to create a new method called createTitle() and call it in the constructor:

```js
constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.createTitle()
}

createTitle () {
  this.title = new Title({
    gl: this.gl,
    plane: this.plane,
    renderer: this.renderer,
    text: this.text,
  })
}
```

Simple as that: we’re just including a new Title() instance inside our Media class, which will output the following result for you:

One of the best things about rendering text inside WebGL is the reduced calculation overhead for the browser when animating the text to the right position. With the DOM approach, you’ll usually see a small performance impact, because browsers need to recalculate DOM sections when translating the text and checking composite layers.

For the purpose of this demo, we also included a new Number() class implementation that is responsible for showing the current index the user is seeing. You can check how it’s implemented in the source code, but it’s basically the same implementation as the Title class, the only difference being that it loads a different font style:

Including background blocks

To finalize the demo, let’s implement some blocks in the background that will be moving in x and y axis to enhance the depth effect of it:

To achieve this effect, we’re going to create a new Background class, and inside of it initialize a new Plane() geometry shared by fifty new Mesh() instances with random sizes and positions, set by changing the scale and position of each mesh in a for loop:

import { Color, Mesh, Plane, Program } from 'ogl' import fragment from 'shaders/background-fragment.glsl'
import vertex from 'shaders/background-vertex.glsl' import { random } from 'utils/math' export default class { constructor ({ gl, scene, viewport }) { = gl this.scene = scene this.viewport = viewport const geometry = new Plane( const program = new Program(, { vertex, fragment, uniforms: { uColor: { value: new Color('#c4c3b6') } }, transparent: true }) this.meshes = [] for (let i = 0; i < 50; i++) { let mesh = new Mesh(, { geometry, program, }) const scale = random(0.75, 1) mesh.scale.x = 1.6 * scale mesh.scale.y = 0.9 * scale mesh.speed = random(0.75, 1) mesh.xExtra = 0 mesh.x = mesh.position.x = random(-this.viewport.width * 0.5, this.viewport.width * 0.5) mesh.y = mesh.position.y = random(-this.viewport.height * 0.5, this.viewport.height * 0.5) this.meshes.push(mesh) this.scene.addChild(mesh) } }

Then after that we just need to apply endless scrolling logic on them as well, following the same directional validation we have in the Media class:

```js
update (scroll, direction) {
  this.meshes.forEach(mesh => {
    mesh.position.x = mesh.x - scroll.current * mesh.speed - mesh.xExtra

    const viewportOffset = this.viewport.width * 0.5
    const widthTotal = this.viewport.width + mesh.scale.x

    mesh.isBefore = mesh.position.x < -viewportOffset
    mesh.isAfter = mesh.position.x > viewportOffset

    if (direction === 'right' && mesh.isBefore) {
      mesh.xExtra -= widthTotal

      mesh.isBefore = false
      mesh.isAfter = false
    }

    if (direction === 'left' && mesh.isAfter) {
      mesh.xExtra += widthTotal

      mesh.isBefore = false
      mesh.isAfter = false
    }

    mesh.position.y += 0.05 * mesh.speed

    if (mesh.position.y > this.viewport.height * 0.5 + mesh.scale.y) {
      mesh.position.y -= this.viewport.height + mesh.scale.y
    }
  })
}
```

And that’s it: now we have the blocks in the background as well, finalizing the code of our demo!

I hope this tutorial was useful to you and don’t forget to comment if you have any questions!

The post Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders appeared first on Codrops.