Current Issue



No 3 (2025)
COMPUTER GRAPHICS AND VISUALIZATION
Testing graphics shaders for use in the on-board visualization system of a civil aircraft
Abstract
The software package of a modern civil aircraft operates under the control of a real-time operating system (RTOS). This technology is safety-critical and must be certified for use. An integral part of the RTOS is the graphics component. Existing aviation applications use graphics shaders that must be compiled before execution, but a shader compiler written in C++ cannot be certified. We therefore propose an approach in which the compiler is not used in the on-board software: shaders are compiled in advance, and during operation they are loaded as a binary software object. Certification of the shader compiler is thus replaced by testing of the software object it produces. We developed a hardware and software complex, independent of a specific target platform, designed to test the compiler. Based on an analysis of aviation applications, a set of tests was developed that checks the correctness of all shader operations used in civil aviation applications. We have thus found and successfully implemented a practical solution to the problem of the impossibility of certifying the shader compiler, which made it possible to include shaders in the certified software of the on-board equipment of a civil aircraft.
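
As an illustration of the general pattern described above (not the authors' certified interface), the sketch below shows how a shader program can be compiled and serialized offline and then loaded on-target as a binary object, using the desktop OpenGL glGetProgramBinary/glProgramBinary calls; the helper structure, the GL loader setup, and the offline/on-board split are assumptions made only for illustration.

    // Assumes an OpenGL 4.1+/ES 3.0-style context and a loader (e.g. glad) already
    // initialized; error handling and file I/O are trimmed for brevity.
    #include <vector>

    // Offline step (ground tool, not flight software): compile, link, and dump the binary.
    std::vector<unsigned char> dumpProgramBinary(GLuint linkedProgram, GLenum& formatOut)
    {
        GLint length = 0;
        glGetProgramiv(linkedProgram, GL_PROGRAM_BINARY_LENGTH, &length);
        std::vector<unsigned char> binary(length);
        glGetProgramBinary(linkedProgram, length, nullptr, &formatOut, binary.data());
        return binary;   // written to a file and shipped with the software load
    }

    // On-board step (flight software): no compiler is invoked, only a binary load.
    GLuint loadProgramBinary(const std::vector<unsigned char>& binary, GLenum format)
    {
        GLuint program = glCreateProgram();
        glProgramBinary(program, format, binary.data(), (GLsizei)binary.size());
        GLint ok = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &ok);   // reject the object if the driver refuses it
        return ok ? program : 0;
    }

Testing the produced binary object rather than the compiler itself then becomes the certification argument, as the abstract describes.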



Study of surface representation methods based on signed distance functions
Abstract
The paper studies ray-tracing-based surface rendering methods for representations based on signed distance functions. The main objects of interest were the execution time of the rendering algorithm, the amount of memory occupied, and the accuracy of the surface representation, estimated from the rendered images using the PSNR metric. Six different representations and four intersection-search algorithms were analyzed. In all cases, a bounding volume hierarchy was used as the accelerating structure. The comparison revealed promising representations and algorithms and showed that distance functions are in some cases not inferior to polygonal models in speed, while they can outperform them in memory consumption and represent the surface with a good level of accuracy.
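
The abstract does not name the intersection-search algorithms compared; the classical baseline for signed distance functions is sphere tracing, sketched below in C++ (the SDF callback, hit tolerance, and iteration cap are illustrative assumptions).

    #include <cmath>
    #include <functional>

    struct Vec3 { float x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    // Sphere tracing: march along the ray by the distance returned by the SDF,
    // which by definition cannot overshoot the nearest surface.
    bool sphereTrace(const std::function<float(Vec3)>& sdf,
                     Vec3 origin, Vec3 dir, float tMax, float& tHit)
    {
        const float eps = 1e-4f;                       // hit tolerance (assumption)
        float t = 0.0f;
        for (int i = 0; i < 256 && t < tMax; ++i) {    // iteration cap (assumption)
            float d = sdf(add(origin, mul(dir, t)));
            if (d < eps) { tHit = t; return true; }    // close enough: report a hit
            t += d;                                    // safe step: the SDF value is a lower bound
        }
        return false;                                  // missed, or ran out of iterations
    }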



Reconstruction of optical properties of real scene objects from images by taking into account secondary illumination and selecting the most important points
Abstract
This paper presents a method for reconstructing the optical properties of objects in a real scene from a series of its images using differentiable rendering. The main goal of this study is to develop an approach that enables high-accuracy reconstruction of the optical characteristics of scene objects while minimizing computational costs. The Introduction considers the relevance of creating realistic models of virtual scenes for computer graphics, as well as their application in virtual reality, augmented reality, and animation. It is noted that, in order to achieve image realism, it is necessary to take into account the scene geometry, illumination parameters, and optical properties of objects. In this study, it is assumed that the scene geometry and light sources are known, and the main task is to reconstruct the optical properties of objects. Section 3 describes the main stages of the proposed approach. The first stage involves data preprocessing, during which key image points characterized by high brightness and a uniform distribution over scene objects are selected. This significantly reduces the amount of data required for optimization. Next, using numerical differentiation and backward ray tracing, luminance gradients with respect to the model parameters are calculated. The proposed algorithm takes into account both primary and secondary illumination, which improves the accuracy of reconstructing the optical characteristics of the scene. At the final stage, the parameters of the optical models are reconstructed using the ADAM method, augmented with the Optuna library for automatic hyperparameter selection. Section 4 describes the experiments carried out on the Cornell Box scene. The result of reconstructing the optical properties is examined, and the original and reconstructed luminances are compared. Certain limitations due to the duration of calculations and the sensitivity to data outliers are identified and discussed in detail. The Conclusions summarize the results and outline directions for further development, including transferring the calculations to the GPU and using more complex models of optical properties to improve the accuracy and speed of the algorithm.
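
For readers unfamiliar with the optimization core, the sketch below combines the two ingredients named above, numerical (central-difference) differentiation and an ADAM update, for an abstract loss that stands in for "render with the current optical parameters and compare luminances with the photographs"; the actual loss, the key-point selection, and the Optuna tuning are not reproduced here.

    #include <cmath>
    #include <functional>
    #include <vector>

    // One optimization step over the optical parameters theta. The moment vectors
    // m and v must have the same size as theta and start at zero; step counts from 1.
    void adamStep(const std::function<double(const std::vector<double>&)>& loss,
                  std::vector<double>& theta, std::vector<double>& m, std::vector<double>& v,
                  int step, double lr = 1e-2, double h = 1e-3,
                  double b1 = 0.9, double b2 = 0.999, double eps = 1e-8)
    {
        for (size_t i = 0; i < theta.size(); ++i) {
            std::vector<double> plus = theta, minus = theta;
            plus[i] += h; minus[i] -= h;
            double g = (loss(plus) - loss(minus)) / (2.0 * h);   // central-difference derivative
            m[i] = b1 * m[i] + (1.0 - b1) * g;                   // first-moment estimate
            v[i] = b2 * v[i] + (1.0 - b2) * g * g;               // second-moment estimate
            double mHat = m[i] / (1.0 - std::pow(b1, step));     // bias correction
            double vHat = v[i] / (1.0 - std::pow(b2, step));
            theta[i] -= lr * mHat / (std::sqrt(vHat) + eps);     // ADAM parameter update
        }
    }

Because every gradient component costs two extra loss evaluations, i.e. two renders, this scheme is expensive, which is consistent with the computation-time limitation mentioned in the abstract.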



Method of geometry reconstruction from a set of RGB images using differentiable rendering and visual hull
Abstract
The use of differentiable rendering methods is a modern solution to the problem of geometry reconstruction from a set of RGB images without expensive equipment. The disadvantages of this class of methods are the geometry distortions that may arise during optimization and the high computational complexity. Modern differentiable rendering methods calculate and use two types of gradients: silhouette gradients and normal gradients. Most distortions arising in geometry optimization are caused by modifications of the parameters associated with silhouette gradients. The paper considers the possibility of increasing the efficiency of geometry reconstruction methods based on differentiable rendering by dividing the reconstruction process into two stages: initialization and optimization. The first stage of reconstruction involves the creation of a visual hull of the reconstructed object. This stage makes it possible to automate the selection of the initial geometry and to start the next stage under two conditions: the silhouettes of the object have already been reconstructed from all observation points, and the topologies of the reconstructed and true objects are equivalent. The second stage comprises a geometry optimization cycle based on the fulfillment of the above conditions. This cycle consists of four steps: image rendering, loss function calculation, gradient calculation, and geometry optimization. Satisfying the condition of matching the contours of the initial and reference geometry eliminates the need to use silhouette gradients. This significantly reduces the number of errors that occur during optimization and lowers the computational complexity of the method by eliminating the calculation of the loss function, the gradient calculation, and the optimization of the parameters associated with object silhouettes. Testing and analysis of the results showed an increase in the accuracy of geometry reconstruction with a decrease in mesh resolution and a decrease in the total running time of the method in comparison with similar methods, as well as an up to two-fold increase in the speed of the optimization steps.
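
A schematic illustration of the first (initialization) stage, silhouette carving of a voxel grid into a visual hull; the camera projection is abstracted into a per-view test, and the data layout is an assumption rather than the paper's implementation.

    #include <functional>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Each view is reduced to a test "does this world-space point project inside
    // the object's silhouette mask?"; projection details are abstracted away here.
    using SilhouetteTest = std::function<bool(const Vec3&)>;

    // Carve a voxel grid: a voxel survives only if it projects inside the silhouette
    // in every view. The surviving voxels approximate the visual hull of the object.
    std::vector<bool> carveVisualHull(const std::vector<SilhouetteTest>& views,
                                      Vec3 gridMin, float voxelSize, int n)
    {
        std::vector<bool> occupied(size_t(n) * n * n, true);
        for (int z = 0; z < n; ++z)
            for (int y = 0; y < n; ++y)
                for (int x = 0; x < n; ++x) {
                    Vec3 c{gridMin.x + (x + 0.5f) * voxelSize,
                           gridMin.y + (y + 0.5f) * voxelSize,
                           gridMin.z + (z + 0.5f) * voxelSize};
                    for (const auto& inSilhouette : views)
                        if (!inSilhouette(c)) {                         // outside at least one silhouette
                            occupied[(size_t(z) * n + y) * n + x] = false;
                            break;
                        }
                }
        return occupied;
    }

One common way to turn such a carved grid into a starting mesh is marching cubes; whether the paper proceeds this way is not stated in the abstract.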



Method for Semantic Image Segmentation Based on the Neural Network with Gabor Filters
Abstract
The article is devoted to the use of Gabor filters to improve the efficiency of convolutional neural networks (CNNs) in image analysis tasks, in particular segmentation. The application of Gabor filters as an adaptive component in the initial layers of a CNN is considered, which improves the extraction of texture and structural features. To achieve an optimal balance between the number of trainable parameters and accuracy, adaptive Gabor filters are proposed that increase the number of input channels without significantly complicating the model. A comparative analysis of PSPNet-based segmentation architectures modified with adaptive Gabor filters is carried out. Limitations on the filter sizes that ensure acceptable computational costs are considered. The relevance of the approach is confirmed on an image segmentation dataset, demonstrating an improvement in accuracy with a minimal increase in the number of parameters.
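
For reference, a C++ sketch of the standard 2D Gabor kernel from which such filter banks are built; the parameterization is the textbook one, and how the paper makes these parameters trainable and adaptive is not shown here.

    #include <cmath>
    #include <vector>

    // Sample a (2k+1)x(2k+1) real Gabor kernel:
    //   g(x,y) = exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2*pi*x'/lambda + psi),
    // where x', y' are the pixel coordinates rotated by theta.
    std::vector<float> gaborKernel(int k, float sigma, float theta,
                                   float lambda, float gamma, float psi)
    {
        const float pi = 3.14159265358979f;
        std::vector<float> kernel((2 * k + 1) * (2 * k + 1));
        for (int y = -k; y <= k; ++y)
            for (int x = -k; x <= k; ++x) {
                float xr =  x * std::cos(theta) + y * std::sin(theta);    // rotate coordinates
                float yr = -x * std::sin(theta) + y * std::cos(theta);
                float envelope = std::exp(-(xr * xr + gamma * gamma * yr * yr)
                                          / (2.0f * sigma * sigma));      // Gaussian envelope
                float carrier  = std::cos(2.0f * pi * xr / lambda + psi); // oriented wave
                kernel[(y + k) * (2 * k + 1) + (x + k)] = envelope * carrier;
            }
        return kernel;
    }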



Adaptive Method for Selecting Basis Functions in Kolmogorov–Arnold Networks for Magnetic Resonance Image Enhancement
Abstract
A way to improve the quality of magnetic resonance image processing using Kolmogorov–Arnold networks for deep feature filtering in a convolutional neural network is studied. The recently proposed Kolmogorov–Arnold networks are inspired by the representation theorem of the same name from real analysis and approximation theory. It states that every continuous multivariate function on a compact set can be represented as a superposition of continuous single-variable functions. However, the subsequent application of gradient descent requires the inner Kolmogorov functions to be at least differentiable, which is why, in practice, they are sought in the linear span of B-splines or other differentiable basis functions. In this study, we propose an adaptive method of basis function selection performed by the model itself during training, mitigating the rule-of-thumb choice of basis functions. The method is based on the attention mechanism successfully used in state-of-the-art transformers. The proposed approach is tested on magnetic resonance image enhancement on the IXI dataset and demonstrates the best average PSNR and TV over the synthetic test dataset. Without loss of generality, the system of basis functions included B-splines, Chebyshev polynomials, and Hermite functions.
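
In LaTeX notation, the Kolmogorov–Arnold representation theorem referred to above, together with the practical parameterization of the single-variable functions in a differentiable basis; the final mixture form is only a schematic reading of the proposed adaptive selection, not the authors' exact formulation.

    % Kolmogorov-Arnold representation theorem:
    f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right)

    % Practical parameterization in the linear span of a differentiable basis:
    \varphi(x) \approx \sum_{k} c_k \, B_k(x)

    % Schematic adaptive mixture over several bases with attention weights (assumption):
    \varphi(x) \approx \sum_{j} w_j \sum_{k} c_{j,k} \, B_k^{(j)}(x), \qquad w = \operatorname{softmax}(a)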



Analyzing the influence of hyperparameters on the efficiency of an OCR model for pre-reform handwritten texts
Abstract
The article considers the influence of hyperparameters on the efficiency of optical handwriting recognition models for texts of the pre-reform period, using the example of nineteenth-century handwritten reports of the governors of the Yenisei province. A comparative analysis of model configurations with different architectural components, including normalization modules, feature extraction blocks, and predictors, is carried out. Particular attention is paid to the role of input image resolution and the size of the hidden layers in achieving an optimal balance between prediction accuracy and computational cost. The results obtained allow us to identify the key parameters for developing optical character recognition systems adapted to historical texts with non-standard orthography and complex structure. Prospects for further research include evaluating methods for synthetically extending the training data and analyzing alternative architectures such as transformers.



Research on Methods for Traversing Two-Level BVH Trees on Graphics Processors
Abstract
A key part of the most common ray tracing methods is the traversal of, and search for intersections in, a bounding volume hierarchy (BVH) describing the scene geometry. This paper presents a comparative analysis of the performance of several BVH tree traversal methods on desktop and mobile graphics processors. We investigated BVH trees with varying depths and numbers of child nodes, implemented several stack-based traversal algorithms and two different stackless traversal algorithms, and proposed our own stackless traversal variant, which in some cases is more performant than existing ones. We also proposed a compression method for BVH trees with two child nodes that loses no more than 15% of performance. We identified a common problem that occurs in almost all algorithms when they are implemented on graphics processors. We believe that our analysis will help developers of ray tracing hardware accelerators create more economical hardware solutions not limited solely to ray tracing. More specifically, our experimental results suggest that up to a 5x speedup can be achieved by changing the L2 cache mechanism, and this has likely already been implemented in desktop GPUs with hardware-accelerated ray tracing, not only within the ray tracing mechanism itself but also in a more general context.
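
For context, a compact C++ sketch of the baseline stack-based traversal that the compared variants (deeper or wider trees, stackless schemes, compressed nodes) start from; the node layout, the fixed stack depth, and the primitive-intersection callback are schematic assumptions.

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct Ray { float origin[3], invDir[3], tMax; };   // invDir = 1/dir, precomputed per ray

    struct BVHNode {                       // schematic 2-wide node layout (assumption)
        float   boundsMin[3], boundsMax[3];
        int32_t leftChild;                 // index of first child, or -1 for a leaf
        int32_t firstPrim, primCount;
    };

    // Standard slab test against the node's axis-aligned bounding box.
    static bool intersectAABB(const BVHNode& n, const Ray& ray)
    {
        float tNear = 0.0f, tFar = ray.tMax;
        for (int a = 0; a < 3; ++a) {
            float t0 = (n.boundsMin[a] - ray.origin[a]) * ray.invDir[a];
            float t1 = (n.boundsMax[a] - ray.origin[a]) * ray.invDir[a];
            if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
            if (t0 > tNear) tNear = t0;
            if (t1 < tFar)  tFar  = t1;
            if (tNear > tFar) return false;
        }
        return true;
    }

    // Baseline depth-first traversal with an explicit per-ray stack; this per-ray
    // memory is exactly what stackless variants try to avoid.
    bool traverse(const std::vector<BVHNode>& nodes, Ray& ray,
                  const std::function<bool(int, Ray&)>& intersectPrimitive)
    {
        bool hit = false;
        int  stack[64];                                  // depth limit (assumption)
        int  top = 0;
        stack[top++] = 0;                                // start at the root
        while (top > 0) {
            const BVHNode& node = nodes[stack[--top]];
            if (!intersectAABB(node, ray)) continue;
            if (node.leftChild < 0) {                    // leaf: test its primitives
                for (int i = 0; i < node.primCount; ++i)
                    hit |= intersectPrimitive(node.firstPrim + i, ray);
            } else {                                     // inner node: push both children
                stack[top++] = node.leftChild;
                stack[top++] = node.leftChild + 1;       // assumes children are stored adjacently
            }
        }
        return hit;
    }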



Multiobject visualization of vast forests in virtual environment systems
Abstract
This paper discusses the task of rendering vast woodlands in virtual environment systems using point clouds and hardware-accelerated ray tracing. A new approach is proposed that represents the forest area as a multiobject comprising the point cloud of a reference tree and a set of distinctive features of its instances. A method for deploying such a multiobject into a virtual forest on the ray tracing pipeline is described; it includes constructing bounding boxes of the reference tree, specifying geometric and color transformations for the tree instances, and synthesizing images of these instances. Based on the proposed method, a software implementation (C++, GLSL, Vulkan) was created and tested on a number of detailed point clouds of real trees (deciduous and evergreen). The testing results confirmed the possibility of synthesizing images of unique vast woodlands (of several million trees) in real time, both from a bird's-eye view and from a pedestrian's point of view. The proposed solution has a wide range of applications: virtual environment systems, video simulators, scientific visualization, geoinformation systems, educational applications, etc.
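
A schematic C++ reading of the "multiobject" idea, one shared reference point cloud plus a compact list of per-instance features; the concrete feature set and the Vulkan/GLSL data layout are assumptions, not the paper's implementation.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Reference tree: a single detailed point cloud shared by all instances.
    struct ReferenceTree {
        std::vector<Vec3> points;
        std::vector<Vec3> colors;
    };

    // Per-instance distinctive features (illustrative set: placement, rotation
    // about the trunk axis, scaling, and a color shift).
    struct TreeInstance {
        Vec3  position;
        float yawRadians;
        float scale;
        Vec3  colorShift;
    };

    // The forest "multiobject": one reference cloud plus a compact feature list,
    // instead of millions of stored per-tree point clouds.
    struct ForestMultiobject {
        ReferenceTree             reference;
        std::vector<TreeInstance> instances;
    };

    // Transform one point of the reference cloud into the world space of an instance.
    Vec3 instancePoint(const Vec3& p, const TreeInstance& t)
    {
        float c = std::cos(t.yawRadians), s = std::sin(t.yawRadians);
        Vec3 r{ c * p.x - s * p.z, p.y, s * p.x + c * p.z };   // rotate about the vertical axis
        return { t.position.x + t.scale * r.x,
                 t.position.y + t.scale * r.y,
                 t.position.z + t.scale * r.z };               // scale and place
    }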



A method for deferred rendering of a set of dynamic point light sources in voxelized scenes in real time
Abstract
With the increase in GPU performance, it has become possible to visualize complex physical phenomena in real time using global illumination algorithms. One such approach is the use of virtual point light sources, where the realism of the images depends on the number of light sources. However, for a large number of light sources, early algorithms required creating a large number of shadow maps to check visibility for the virtual point lights, so it was problematic to achieve high-quality real-time images until new methods were developed. The purpose of the presented work is to create a deferred rendering method for thousands of point light sources in voxelized scenes in real time. In the first pass, a geometric sparse voxel octree is computed. A geometric buffer is used that stores information about position, normals, and materials for direct and indirect lighting. Then, reflective shadow maps are generated, and samples are selected by importance so that not every texel has to be checked. Direct illumination is calculated using shadow maps, while for indirect illumination a ray marching algorithm is used to check the visibility of the point light sources. Interleaved sampling is used to speed up the calculations. As a result, the proposed method makes it possible to create realistic images of scenes with global illumination in real time. Using a GPU, thousands of point light sources can be computed in real time, and fully dynamic scenes can be visualized. However, glossy surfaces require a greater number of point light sources for the images to reproduce the appearance of the material accurately and without artifacts.
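
A schematic C++ sketch of the visibility test described above, ray marching from a shaded point toward a virtual point light through a binary voxelization; a dense grid is used here for brevity instead of the paper's sparse voxel octree, and the step size is an assumption.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Dense binary voxelization used here for brevity; the paper builds a sparse
    // voxel octree, but the visibility idea is the same.
    struct VoxelGrid {
        Vec3  minCorner;
        float voxelSize;
        int   n;                                   // n x n x n cells
        std::vector<bool> occupied;

        bool sample(const Vec3& p) const {
            int x = int((p.x - minCorner.x) / voxelSize);
            int y = int((p.y - minCorner.y) / voxelSize);
            int z = int((p.z - minCorner.z) / voxelSize);
            if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n) return false;
            return occupied[(size_t(z) * n + y) * n + x];
        }
    };

    // March from the shaded point toward the virtual point light; if an occupied
    // voxel is met before the light, its contribution is discarded.
    bool vplVisible(const VoxelGrid& grid, Vec3 shadedPoint, Vec3 lightPos)
    {
        Vec3  d{lightPos.x - shadedPoint.x, lightPos.y - shadedPoint.y, lightPos.z - shadedPoint.z};
        float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        float step = 0.5f * grid.voxelSize;                              // step size (assumption)
        for (float t = 2.0f * step; t < dist - 2.0f * step; t += step) { // skip endpoints to avoid self-occlusion
            Vec3 p{shadedPoint.x + d.x / dist * t,
                   shadedPoint.y + d.y / dist * t,
                   shadedPoint.z + d.z / dist * t};
            if (grid.sample(p)) return false;                            // blocked by scene geometry
        }
        return true;
    }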


