andreas.alex.vasilakis[at]gmail.com
DBLP | Scholar | CV | abasilak
*No triangles were harmed during the implementation of my research work.
My name is Andreas-Alexandros Vasilakis and I was born on October 12, 1983, in Corfu, Greece. I received my PhD in the field of Computer Graphics from the Department of Computer Science & Engineering of the University of Ioannina in Greece, under the supervision of Prof. Ioannis Fudos. My PhD studies were supported by a Heraclitus II scholarship under the operational programme "Education and Lifelong Learning", co-financed by the European Social Fund (2010-2013). I also received BSc and MSc degrees from the same institution, in 2006 and 2008 respectively. I am currently a postdoctoral fellow at the Athens University of Economics and Business, as well as an adjunct lecturer in the Department of Computer Science and Engineering of the University of Ioannina. My research interests include interactive graphics, geometry processing techniques and rendering algorithms.
Recently, I co-founded Phasmatic, a company that pushes the boundaries of photorealistic 3D graphics on the web.
Correctly compositing transparent fragments is an important and long-standing open problem in real-time computer graphics. Multifragment rendering is considered a key solution for providing high-quality order-independent transparency at interactive frame rates. To achieve this, practical implementations severely constrain the overall memory budget by adopting bounded fragment configurations such as the k-buffer. Relying on an iterative trial-and-error procedure, however, where the value of k is manually configured for each scenario, inevitably results in poor memory utilization and view-dependent artifacts. To this end, we introduce a novel intelligent k-buffer approach that performs a non-uniform per-pixel fragment allocation guided by a deep learning prediction mechanism. A hybrid scheme is further employed to facilitate the approximate blending of the non-significant (remaining) fragments and thus contribute to a better estimate of the final color. An experimental evaluation substantiates that our method outperforms previous approaches when evaluating transparency in various high depth-complexity scenes.
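As a rough illustration of the hybrid step, the sketch below blends a pixel's k nearest fragments exactly and approximates the remaining tail with an alpha-weighted average, in the spirit of hybrid transparency schemes; it is a minimal CPU-side stand-in (NumPy), not the paper's GPU implementation or its learned predictor.

```python
import numpy as np

def composite_hybrid(fragments, k):
    """Blend the k nearest fragments exactly (sorted, front-to-back) and
    approximate the remaining tail order-independently (illustrative only).
    fragments: list of (depth, rgb, alpha) tuples for one pixel."""
    frags = sorted(fragments, key=lambda f: f[0])        # sort by depth
    core, tail = frags[:k], frags[k:]

    color, transmittance = np.zeros(3), 1.0
    for _, rgb, a in core:                               # exact front-to-back blend
        color += transmittance * a * np.asarray(rgb, dtype=float)
        transmittance *= 1.0 - a

    if tail:                                             # approximate tail estimate
        alphas = np.array([a for _, _, a in tail])
        rgbs = np.array([rgb for _, rgb, _ in tail], dtype=float)
        avg = (alphas[:, None] * rgbs).sum(0) / alphas.sum()  # weighted average
        tail_alpha = 1.0 - np.prod(1.0 - alphas)         # combined tail coverage
        color += transmittance * tail_alpha * avg
        transmittance *= 1.0 - tail_alpha
    return color, transmittance

pixel = [(0.3, (1, 0, 0), 0.5), (0.1, (0, 1, 0), 0.4), (0.7, (0, 0, 1), 0.8)]
print(composite_hybrid(pixel, k=2))
```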
This chapter introduces a new API built for ray tracing in a web browser, expanding hardware-accelerated ray tracing access to web applications. WebRays currently powers the website Rayground, which allows people to both develop and run ray tracing applications in their web browser. WebRays supports shaders written in GLSL via WebGL; it provides a JavaScript host-side API and a GLSL device-side API to enable hardware-accelerated ray-triangle intersections. This ray-casting API has been designed to accommodate both wavefront and megakernel style architectures, leaving room for whatever style of ray tracing pipeline the user wants.
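To make the wavefront notion concrete, here is a minimal, self-contained CPU sketch in which every stage (ray generation, intersection, shading) runs over the whole ray batch before the next begins; the one-sphere scene is an illustrative stand-in and none of this is the WebRays API itself.

```python
import numpy as np

W, H = 64, 64
SPHERE_C, SPHERE_R = np.array([0.0, 0.0, -3.0]), 1.0
LIGHT_DIR = np.array([0.577, 0.577, 0.577])

def generate_primary_rays():
    xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
    d = np.stack([xs, ys, -np.ones_like(xs)], axis=-1).reshape(-1, 3)
    return np.zeros((W * H, 3)), d / np.linalg.norm(d, axis=1, keepdims=True)

def intersect(o, d):
    """Batched ray/sphere intersection: hit distance per ray (inf on miss)."""
    oc = o - SPHERE_C
    b = np.einsum('ij,ij->i', oc, d)
    disc = b * b - (np.einsum('ij,ij->i', oc, oc) - SPHERE_R ** 2)
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return np.where((disc >= 0.0) & (t > 1e-4), t, np.inf)

o, d = generate_primary_rays()        # stage 1: one batch of primary rays
t = intersect(o, d)                   # stage 2: whole batch intersected at once
hit = np.isfinite(t)
n = o[hit] + t[hit, None] * d[hit] - SPHERE_C
n /= np.linalg.norm(n, axis=1, keepdims=True)
shade = np.zeros(W * H)
shade[hit] = np.maximum(n @ LIGHT_DIR, 0.0)  # stage 3: batched Lambert shading
image = shade.reshape(H, W)
```

A megakernel, by contrast, would run all three stages to completion per ray inside one loop body; the batched structure above is what lets an intersection API be invoked once per wavefront.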
Rayground is a novel online framework for fast prototyping and interactive demonstration of ray tracing algorithms. It aims to set the ground for the online development of ray-traced visualization algorithms in a manner accessible to everyone, stripping away the mechanics that get in the way of creativity and of understanding the core concepts.
Due to the COVID-19 pandemic, remote teaching and online coursework have taken center stage. In this work, we demonstrate how Rayground can incorporate advanced instructive rendering media during online lectures, as well as offer attractive student assignments in an engaging, hands-on manner. We cover things to consider when building or porting methods to this new development platform, best practices in remote teaching and learning activities, and time-tested assessment and grading strategies suitable for fully online university courses.
Spatial queries to infer information from the neighborhood of a set of points are very frequently performed in rendering and geometry processing algorithms. Traditionally, these are accomplished using radius and k-nearest neighbors search operations, which utilize kd-trees and other specialized spatial data structures that fall short of delivering high performance. Recently, advances in ray tracing performance, with respect to both acceleration data structure construction and ray traversal times, have resulted in a wide adoption of the ray tracing paradigm for graphics-related tasks that spread beyond typical image synthesis. In this work, we propose an alternative formulation of the radius search operation that maps the problem to the ray tracing paradigm, in order to take advantage of the available GPU-accelerated solutions for it. We demonstrate the performance gain relative to traditional spatial search methods, especially on dynamically updated sample sets, using two representative applications: geometry processing point-wise operations on scanned point clouds and global illumination via progressive photon mapping.
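The core of the reformulation can be shown in a few lines: treat every sample point as a sphere of radius r and "trace" a degenerate, zero-length ray from the query point, so that a sphere is hit exactly when its center (the sample) lies within radius r of the query; a GPU ray tracer's acceleration structure then does the pruning. In this sketch a brute-force test stands in for the hardware intersection stage.

```python
import numpy as np

def radius_search_as_ray_tracing(samples, query, r):
    """Each sample is a bounding sphere of radius r; a zero-length ray at
    'query' hits a sphere iff the query lies inside it, i.e. iff the sample
    is within radius r of the query. Brute force stands in for the BVH."""
    d2 = np.sum((samples - query) ** 2, axis=1)
    return np.nonzero(d2 <= r * r)[0]        # indices of the "hit" spheres

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))                  # mock scanned point cloud
print(radius_search_as_ray_tracing(pts, np.array([0.5, 0.5, 0.5]), 0.1))
```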
Ray tracing on the GPU has been synergistically operating alongside rasterization in interactive rendering engines for some time now, in order to accurately capture certain illumination effects. In the same spirit, in this paper, we propose an implementation of Progressive Photon Mapping entirely on the rasterization pipeline, which is agnostic to the specific GPU architecture, in order to synthesise images at interactive rates. While any GPU ray tracing architecture can be used for photon mapping, performing ray traversal in image space minimises acceleration data structure construction time and supports arbitrarily complex and fully dynamic geometry. Furthermore, this strategy maximises data structure reuse by encompassing rasterization, ray tracing and photon gathering tasks in a single data structure. Both eye and light paths of arbitrary depth are traced on multi-view deep G-buffers and photon flux is gathered by a properly adapted multi-view photon splatting. In contrast to previous methods exploiting rasterization to some extent, due to our novel indirect photon splatting approach, any event combination present in photon mapping is captured. We evaluate our method using typical test scenes and scenarios for photon mapping methods and show how our approach outperforms typical GPU-based progressive photon mapping.
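For context, the progressive estimator at the heart of any progressive photon mapping variant, including a rasterization-based one, is the standard per-point radius and flux update of Hachisuka et al. (2008); the sketch below shows that generic machinery rather than the paper's splatting-specific implementation.

```python
# Each pass shrinks a measurement point's gather radius and rescales its
# accumulated flux, so the radiance estimate converges as photons arrive.

def ppm_update(R2, N, tau, M, phi, alpha=0.7):
    """One progressive pass for a single measurement point.
    R2:  squared gather radius     N:   accumulated photon count
    tau: accumulated scaled flux   M:   photons gathered this pass
    phi: their summed flux         alpha: fraction of new photons kept."""
    if M == 0:
        return R2, N, tau                    # nothing gathered this pass
    ratio = (N + alpha * M) / (N + M)        # radius reduction factor
    return R2 * ratio, N + alpha * M, (tau + phi) * ratio

state = (1e-2, 0.0, 0.0)                     # R^2, N, tau for one point
for M, phi in [(10, 0.5), (7, 0.3), (12, 0.6)]:
    state = ppm_update(*state, M, phi)
```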
In the past few years, advances in graphics hardware have fuelled an explosion of research and development in the field of interactive and real-time rendering in screen space. Following this trend, a rapidly increasing number of applications rely on multifragment rendering solutions to develop visually convincing graphics applications with dynamic content. The main advantage of these approaches is that they encompass additional rasterised geometry, by retaining more information from the fragment sampling domain, thus augmenting the visibility determination stage. With this survey, we provide an overview of and insight into the extensive, yet active research and respective literature on multifragment rendering. We formally present the multifragment rendering pipeline, clearly identifying the construction strategies, the core image operation categories and their mapping to the respective applications. We describe features and trade-offs for each class of techniques, pointing out GPU optimisations and limitations and provide practical recommendations for choosing an appropriate method for each application. Finally, we offer fruitful context for discussion by outlining some existing problems and challenges as well as by presenting opportunities for impactful future research directions.
Lighting plays a very important role in interior design. However, in the specific problem of furniture layout recommendation, illumination has been either neglected or addressed with empirical or very simplified solutions. The effectiveness of a particular layout in its expected task performance can be greatly affected by daylighting and artificial illumination in a non-trivial manner. In this paper, we introduce a robust method for furniture layout optimization guided by illumination constraints. The method takes into account all dominant light sources, such as sun light, skylighting and fixtures, while also being able to handle movable light emitters. For this task, the method introduces multiple generic illumination constraints and physically-based light transport estimators, operating alongside typical geometric design guidelines, in a unified manner. We demonstrate how to produce furniture arrangements that comply with important safety, comfort and efficiency illumination criteria, such as glare suppression, under complex light-environment interactions, which are very hard to handle using empirical or simplified models.
We introduce a novel approach to support fast and efficient lossy compression of arbitrary animation sequences, ideally suited for real-time scenarios such as streaming and content creation applications, where the input is not known a priori and is dynamically generated. The presented method exploits temporal coherence by altering the principal component analysis (PCA) procedure from a batch basis to an adaptive basis, aiming to simultaneously support three important objectives that generally conflict in prior art: fast compression times, reduced memory requirements and high-quality reproduction results. To that end, we show how the problem of tracking subspaces via adaptive orthogonal iterations can be successfully applied to support bandwidth- as well as error-consistent encoding of sequentially processed animated data. A dynamic compression pipeline is presented that can efficiently approximate the k-largest PCA bases based on the previous iteration (frame block) at a significantly lower complexity than directly computing the singular value decomposition. To avoid under-fitting when a fixed number of basis vectors is used for all frame blocks, a flexible solution that automatically identifies the optimal subspace size for each one is also offered. An extensive experimental study finally shows that our method is superior in terms of performance compared to several direct PCA-based schemes while, at the same time, achieving plausible reconstruction output despite the constraints posed by arbitrarily complex animated scenarios.
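A minimal sketch of the underlying idea (illustrative, not the paper's exact scheme): instead of a full SVD per frame block, warm-start orthogonal iterations from the previous block's basis; temporal coherence means a couple of power steps keep the k-dominant PCA subspace up to date at much lower cost.

```python
import numpy as np

def track_subspace(block, Q_prev, iters=2):
    """block: (3V, F) matrix of one frame block; Q_prev: (3V, k) basis."""
    Q = Q_prev
    for _ in range(iters):
        Q, _ = np.linalg.qr(block @ (block.T @ Q))  # power step, no big B B^T
    return Q                                        # refreshed orthonormal basis

rng = np.random.default_rng(1)
V, F, k = 300, 16, 8
Q = np.linalg.qr(rng.standard_normal((3 * V, k)))[0]  # initial random basis
for _ in range(5):                                    # stream of frame blocks
    Q = track_subspace(rng.standard_normal((3 * V, F)), Q)
```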
Depth-sorted fragment determination is fundamental for a host of image-based techniques that simulate complex rendering effects. It is also a challenging task in terms of the time and space required when rasterizing scenes of high depth complexity. When low graphics memory requirements are of utmost importance, the k-buffer can objectively be considered the most preferred framework, advantageously ensuring correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm, at the expense of increased memory or downgraded performance, appropriate tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce k+-buffer, a fast framework that accurately simulates the behavior of the k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k-foremost fragments per pixel by exploiting pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth-complexity frequencies, (b) minimize the allocated size of the k-buffer according to different application goals and hardware limitations via a straightforward depth-histogram analysis and (c) manage the local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation is provided, demonstrating the advantages of our work over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
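A CPU sketch of the max-heap variant: each pixel keeps its k nearest fragments in a bounded heap whose root is the farthest kept fragment, so an incoming fragment is rejected in O(1) when it lies behind the root and otherwise replaces it in O(log k). Python's heapq is a min-heap, hence the negated depths; on the GPU the equivalent runs per pixel under pixel synchronization.

```python
import heapq

def kbuffer_insert(heap, k, depth, data):
    if len(heap) < k:
        heapq.heappush(heap, (-depth, data))     # buffer still filling up
    elif depth < -heap[0][0]:                    # closer than current k-th?
        heapq.heapreplace(heap, (-depth, data))  # evict the farthest kept

pixel = []                                       # one pixel's k-buffer
for i, d in enumerate([0.9, 0.2, 0.5, 0.7, 0.1, 0.4]):
    kbuffer_insert(pixel, 3, d, f'frag{i}')
print(sorted(-nd for nd, _ in pixel))            # -> [0.1, 0.2, 0.4]
```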
Many applications require operations on multiple fragments that result from ray casting at the same pixel location. To this end, several approaches have been introduced that process one or more fragments per pixel per rendering pass, so as to produce a multifragment effect. However, multifragment rasterization is susceptible to flickering artifacts when two or more visible fragments of the scene have identical depth values. This phenomenon, called coplanarity or Z-fighting, incurs various unpleasant and unintuitive results when rendering complex multilayer scenes. In this work, we develop depth-fighting aware algorithms for reducing, eliminating and/or detecting related flaws in scenes suffering from duplicate geometry. We adapt previously presented single- and multi-pass rendering methods, providing alternatives for both commodity and modern graphics hardware. We report on the efficiency and robustness of all these alternatives and provide comprehensive comparison results. Finally, visual results are offered, illustrating the effectiveness of our variants for a number of applications where depth accuracy and order are of critical importance.
We explore different semantics for the solid defined by a self-crossing surface (immersed sub-manifold). Specifically, we introduce rules for the interior/exterior classification of the connected components of the complement of a self-crossing surface produced through a continuous deformation process of an initial embedded manifold. We propose efficient GPU algorithms for rendering the boundary of the regularized union of the interior components, which is a subset of the initial surface and is called the trimmed boundary or simply the trim. This classification and rendering process is accomplished in realtime through a rasterization process without computing any self-intersection curve, and hence is suited to support animations of self-crossing surfaces. The solid bounded by the trim can be combined with other solids and with half-spaces using Boolean operations and hence may be capped (trimmed by a half-space) or used as a primitive in direct CSG rendering. Being able to render the trim in realtime makes it possible to adapt the tessellation of the trim in realtime by using view-dependent levels-of-details or adaptive subdivision.
In this paper, we present a skeletal rigid skinning approach. First, we describe a skeleton extraction technique that produces refined skeletons appropriate for animation from decomposed character models. Then, to avoid the artifacts generated in previous skinning approaches and the associated high training costs, we develop an efficient and robust rigid skinning technique that applies blending patches around joints. To achieve real time animation, we have adapted all steps of our rigid skinning algorithm so that they are performed efficiently on the GPU. Finally, we present an evaluation of our methods against four criteria: efficiency, quality, scope, and robustness.
We introduce a simplification method for light probe configurations that preserves the indirect illumination distribution in scenes with diverse lighting conditions. An iterative graph simplification algorithm discards the probes that, according to a set of evaluation points, have the least impact on the global light field. Our approach is simple, generic and aims at improving the repetitive and often non-intuitive and tedious task of placing light probes on complex virtual environments.
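A hedged sketch of the iterative simplification idea: repeatedly discard the probe whose removal least perturbs the field reconstructed at the evaluation points. The inverse-distance reconstruction below is only an illustrative stand-in for an actual light-field interpolation scheme, and the greedy loop is a simplification of the paper's graph-based procedure.

```python
import numpy as np

def reconstruct(probe_pos, probe_val, eval_pts):
    """Inverse-distance interpolation of probe values at evaluation points."""
    w = 1.0 / (np.linalg.norm(eval_pts[:, None] - probe_pos[None], axis=2) + 1e-6)
    return (w @ probe_val) / w.sum(axis=1, keepdims=True)

def simplify_probes(pos, val, eval_pts, target):
    keep = list(range(len(pos)))
    ref = reconstruct(pos, val, eval_pts)            # full-set reference field
    while len(keep) > target:
        errs = [np.abs(reconstruct(np.delete(pos[keep], i, 0),
                                   np.delete(val[keep], i, 0),
                                   eval_pts) - ref).mean()
                for i in range(len(keep))]
        keep.pop(int(np.argmin(errs)))               # drop least-impactful probe
    return keep

rng = np.random.default_rng(4)
pos, val = rng.random((20, 3)), rng.random((20, 3))  # 20 mock RGB probes
evals = rng.random((100, 3))                         # evaluation points
print(simplify_probes(pos, val, evals, target=8))    # 8 surviving probes
```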
In this paper, we present Rayground; an online, interactive education tool for richer in-class teaching and gradual self-study, which provides a convenient introduction into practical ray tracing through a standard shader-based programming interface. Setting up a basic ray tracing framework via modern graphics APIs, such as DirectX 12 and Vulkan, results in complex and verbose code that can be intimidating even for very competent students. On the other hand, Rayground aims to demystify ray tracing fundamentals, by providing a well-defined WebGL-based programmable graphics pipeline of configurable distinct ray tracing stages coupled with a simple scene description format. An extensive discussion is further offered describing how both undergraduate and postgraduate computer graphics theoretical lectures and laboratory sessions can be enhanced by our work, to achieve a broad understanding of the underlying concepts. Rayground is open, cross-platform, and available to everyone.
Successfully predicting visual attention can significantly improve many aspects of computer graphics and games. Despite thorough investigation in this area, selective rendering has so far not addressed fragment visibility determination problems. To this end, we present the first ''selective multi-fragment rendering'' solution, which alters the classic k-buffer construction procedure from a fixed-k to a variable-k per-pixel fragment allocation guided by an importance-driven model. Given a fixed memory budget, the idea is to allocate more fragment layers in the parts of the image that need them most or contribute most significantly to the visual result. An importance map, dynamically estimated per frame based on several criteria, is used to distribute the fragment layers across the image. We illustrate the effectiveness and quality superiority of our approach in comparison to previous methods when performing order-independent transparency rendering in various high depth-complexity scenarios.
✝ These authors contributed equally to this work.
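A sketch of the core allocation step: spread a fixed fragment budget across pixels in proportion to a per-pixel importance map, replacing the classic fixed k with a variable k per pixel. This is purely illustrative; the paper's importance criteria and redistribution policy are more involved.

```python
import numpy as np

def allocate_layers(importance, budget, k_max=32):
    w = importance / importance.sum()
    k = np.floor(w * budget).astype(int)   # proportional share of the budget
    return np.clip(k, 1, k_max)            # every pixel keeps at least 1 layer

imp = np.random.default_rng(2).random((4, 4)) ** 2   # mock importance map
print(allocate_layers(imp, budget=4 * 4 * 8))        # ~8 layers/pixel on average
```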
In computer graphics, animation compression is essential for efficient storage, streaming and reproduction of animated meshes. Previous work has presented efficient techniques for compression using skinning transformations to derive the animated mesh from a reference pose. We present a pose-to-pose approach to skinning animated meshes by observing that only small deformation variations will normally occur between consecutive poses. The transformations are applied so that a new pose is derived by deforming the geometry of the previous pose, thus maintaining temporal coherence in the parameter space, reducing approximation error and facilitating forward propagated editing of arbitrary poses.
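The pose-to-pose idea can be sketched as follows: for each vertex cluster, fit the transform that best maps the cluster's positions in the previous pose onto the current pose, so that only small inter-frame corrections need encoding. The least-squares affine fit below is an illustrative stand-in for the skinning transformations discussed above.

```python
import numpy as np

def fit_pose_to_pose(prev_pts, cur_pts):
    """prev_pts, cur_pts: (N, 3) cluster positions in consecutive poses.
    Returns the (4, 3) affine map minimizing ||[prev, 1] T - cur||."""
    P = np.hstack([prev_pts, np.ones((len(prev_pts), 1))])  # homogeneous coords
    T, *_ = np.linalg.lstsq(P, cur_pts, rcond=None)
    return T

rng = np.random.default_rng(3)
prev = rng.random((50, 3))
cur = prev @ np.diag([1.0, 1.1, 0.9]) + 0.05               # mildly deformed pose
T = fit_pose_to_pose(prev, cur)
err = np.abs(np.hstack([prev, np.ones((50, 1))]) @ T - cur).max()  # ~0
```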
We introduce a novel approach to image-space ray tracing ideally suited for the photorealistic synthesis of fully dynamic environments at interactive frame rates. Our method, designed entirely on the rasterization pipeline, alters the acceleration data structure construction from a per-fragment to a per-primitive basis in order to simultaneously support three important objectives that generally conflict in prior art: fast construction times, analytic intersection tests and reduced memory requirements. In every frame, our algorithm operates in two stages: a compact representation of the scene geometry is built based on primitive linked lists, followed by a traversal step that decouples the ray-primitive intersection tests from the illumination calculations; a process inspired by deferred rendering and the path integral formulation of light transport. Efficient empty-space skipping is achieved by exploiting several culling optimizations in both xy- and z-space, such as pixel frustum clipping, depth subdivision and lossless buffer down-scaling. An extensive experimental study is finally offered, showing that our method advances the area of image-based ray tracing under the constraints posed by arbitrarily complex and animated scenarios.
We introduce a generic method for interactive ray tracing, able to support complex and dynamic environments, without the need for precomputations or the maintenance of additional spatial data structures. Our method, which relies entirely on the rasterization pipeline, stores fragment information for the entire scene on a multiview and multilayer structure and marches through depth layers to capture both near and distant information for illumination computations. Ray tracing is efficiently achieved by concurrently traversing a novel cube-mapped A-buffer variant in image space that exploits GPU-accelerated double linked lists, decoupled storage, uniform depth subdivision and empty space skipping on a per-fragment basis. We illustrate the effectiveness and quality of our approach on path tracing and ambient occlusion implementations in scenarios, where full scene coverage is of major importance. Finally, we report on the performance and memory usage of our pipeline and compare it against GPGPU ray tracing approaches.
In this work, we investigate an efficient approach to treat fragment racing when computing the k-nearest fragments. Based on the observation that knowing the depth position of the k-th fragment lets us optimally find the k-closest fragments, we introduce a novel fragment culling component that employs occupancy maps. Without any software redesign, the proposed scheme can easily be attached to any k-buffer pipeline to efficiently perform early-z culling. Finally, we report on the efficiency, memory space, and robustness of the upgraded k-buffer alternatives, providing comprehensive comparison results.
In this work, we investigate an efficient approach to treat fragment racing when computing the k-nearest fragments. Based on the observation that knowing the depth position of the k-th fragment lets us optimally find the k-closest ones, we introduce a novel order-independent fragment culling component, easily attached to the k+-buffer pipeline. An additional rendering pass of the scene's geometry is initially employed to construct a per-pixel binary fragment occupancy discretization. Then, the nearest depth of the k-th per-pixel fragment is concurrently computed by performing bit counting operations and subsequently utilized to perform early-z rejection for the k+-buffer construction process that follows. Any fragment with depth larger than this value will fail the depth test, avoiding the cost of its pixel shading execution. Note that no software modifications are required to the actual k+-buffer implementation.
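A sketch of the occupancy-map idea: a first geometry pass sets one bit per depth bucket that receives a fragment; counting set bits front to back then yields a (conservative, since a bucket may hold several fragments) depth bound for the k-th nearest fragment, which drives early-z rejection in the pass that follows. The uniform bucketing below is illustrative.

```python
BITS = 32                               # depth buckets across the pixel's range

def build_occupancy(depths, z_near, z_far):
    mask = 0
    for z in depths:                    # rasterization-pass analogue
        b = min(int((z - z_near) / (z_far - z_near) * BITS), BITS - 1)
        mask |= 1 << b
    return mask

def kth_depth_bound(mask, k, z_near, z_far):
    seen = 0
    for b in range(BITS):               # front-to-back bit counting
        if mask >> b & 1:
            seen += 1
            if seen == k:               # k-th occupied bucket found
                return z_near + (b + 1) / BITS * (z_far - z_near)
    return z_far                        # fewer than k fragments: keep all

mask = build_occupancy([0.12, 0.40, 0.41, 0.77, 0.90], 0.0, 1.0)
zk = kth_depth_bound(mask, 3, 0.0, 1.0)  # fragments deeper than zk are culled
```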
We report on the development of a novel interactive augmented reality app called AR-TagBrowse, built on Unity 3D, that enables users to tag and browse 3D objects. Users upload 3D objects (polygonal representation and diffuse maps) through a web server. The 3D objects are then linked to real-world information such as images and GPS location. Users may optionally segment the objects into areas of interest. Such objects subsequently pop up in the AR-TagBrowse app when one of these events is detected (a visible location or image). The user can then interactively view the 3D object, browse tags, or enter new tags providing comments or information for specific parts of the object.
The k-buffer facilitates novel approaches to multi-fragment rendering and visualization for developing interactive applications on the GPU. Various alternatives have been proposed to alleviate its memory hazards and to completely or partially avoid the necessity of geometry pre-sorting. However, these came with the burden of excessive memory allocation and depth precision artifacts. We introduce k+-buffer, a fast and accurate framework that simulates the k-buffer behavior by exploiting fragment culling and pixel synchronization. Two GPU-accelerated data structures have been developed: (i) the max-array and (ii) the max-heap. These memory-bounded data structures accurately maintain the k-foremost fragments per pixel in a single geometry pass. The choice of data structure depends on the (application-dependent) size k. Without any software redesign, the proposed scheme can be adapted to perform as a Z-buffer or an A-buffer, capturing a single or all generated fragments, respectively. A memory-friendly strategy is also proposed, extending the proposed pipeline to dynamically lessen potentially wasteful memory allocation. Finally, an extensive experimental evaluation is provided, demonstrating the advantages of k+-buffer over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
This work introduces S-buffer, an efficient and memory-friendly GPU-accelerated A-buffer architecture for multi-fragment rendering. Memory is organized into variable, contiguous regions for each pixel, thus avoiding the limitations of linked-list and fixed-array techniques. S-buffer exploits the fragment distribution for precise allocation of the needed storage and the pixel sparsity (empty-pixel ratio) for computing the memory offsets of each pixel in a parallel fashion. An experimental comparative evaluation of our technique against previous multi-fragment rendering approaches, in terms of memory and performance, is provided.
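A sketch of the two-pass allocation this describes: a counting pass records each pixel's fragment count, an exclusive prefix sum turns counts into contiguous memory offsets, and a final pass scatters fragments into each pixel's private, variable-sized region. On the GPU the prefix sum and cursors run in parallel with atomics; the serial version below shows the data layout only.

```python
import numpy as np

counts = np.array([0, 3, 0, 1, 4, 0, 2])                 # per-pixel counts (pass 1)
offsets = np.concatenate([[0], np.cumsum(counts)[:-1]])  # exclusive prefix sum
pool = np.empty(counts.sum(), dtype=object)              # one tight fragment pool

cursor = offsets.copy()                                  # per-pixel write cursors
for pixel, frag in [(1, 'a'), (4, 'b'), (1, 'c'), (6, 'd'),
                    (4, 'e'), (3, 'f'), (4, 'g'), (1, 'h'), (4, 'i'), (6, 'j')]:
    pool[cursor[pixel]] = frag                           # scatter pass (pass 2)
    cursor[pixel] += 1

print(pool[offsets[4]:offsets[4] + counts[4]])           # pixel 4's fragments
```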
Efficiently capturing the entire topological and geometric information of a 3D scene is an important feature in many graphics applications for rendering multi-fragment effects. Example applications include order-independent transparency, volume rendering, CSG rendering, trimming and shadow mapping, all of which require operations on more than one fragment per pixel location. An influential multi-pass technique is front-to-back (F2B) depth peeling, which works by peeling off a single fragment per pass and by exploiting the GPU's capabilities to accumulate the final result. The major drawback of this peeling algorithm is that fragment layers with depth identical to the fragment depth detected in the previous pass are discarded and so not peeled. Stencil Routed A-buffer (SRAB) treats Z-fighting for sorted fragments. However, SRAB is limited by the resolution of the stencil buffer and is incompatible with hardware-supported multisample antialiasing. The k-buffer processes k fragments in a single pass, thus performing up to k times faster than F2B. The k-buffer suffers from read-modify-write hazards and needs a small fixed amount of additional memory, allocated in the form of multiple render target buffers. Similarly to SRAB, the k-buffer requires a pre-sorting of the primitives of the scene to correctly treat up to k Z-fighting fragments. In this work, we introduce a novel technique for commodity graphics hardware that completely treats Z-fighting by extending F2B depth peeling with the overhead of one extra geometry pass. To speed up depth peeling in scenes with a large number of layers of identical depth values, we also propose an approximate Z-fighting-free depth peeling technique that combines the F2B and k-buffer algorithms.
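A CPU analogue of F2B depth peeling for one pixel makes the failure mode concrete: each pass keeps the nearest fragment strictly behind the previous pass's peel depth, and the strict comparison is exactly why coplanar (Z-fighting) fragments are lost, which the extended pass described above recovers.

```python
import math

def depth_peel(fragments):
    """Peel (depth, payload) fragments of one pixel, front to back."""
    peeled, z_prev = [], -math.inf
    while True:
        layer = min((f for f in fragments if f[0] > z_prev),  # strictly behind
                    key=lambda f: f[0], default=None)
        if layer is None:
            return peeled
        peeled.append(layer)
        z_prev = layer[0]

frags = [(0.5, 'B'), (0.2, 'A'), (0.5, 'C'), (0.8, 'D')]
print(depth_peel(frags))   # duplicate depth 0.5: only one of B/C survives
```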
In computer animation, key-frame compression is essential for the efficient storage, processing and reproduction of animation sequences. Previous work has presented efficient techniques for compression using affine or rigid transformations to derive the skin from the initial pose using a relatively small number of control joints. We present a novel pose-to-pose approach to skinning animated meshes, based on the observation that only small deformation variations normally occur between sequential poses. The transformations are applied so that a new pose is derived by transforming the vertices of the previous pose, thus maintaining temporal coherence in the parameter space, reducing error and enabling a novel forward-propagated editing of arbitrary animation frames.
Skeleton-based skinning is widely used for realistic animation of complex characters defining mesh movement as a function of the underlying skeleton. In this paper, we propose a new robust skeletal animation framework for 3D articulated models. The contribution of this work is twofold. First, we present refinement techniques for improving skeletal representation based on local characteristics which are extracted using centroids and principal axes of the character’s components. Then, we use rigid skinning deformations to achieve realistic motion avoiding vertex weights. A novel method eliminates the artifacts caused by self-intersections, providing sufficiently smooth skin deformation.
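The rigid-skinning core is simple to sketch: every vertex follows exactly one driver bone, so its skinned position is the bone's current world matrix times its inverse bind matrix times the rest position, with no per-vertex weights to author or train; the joint-area smoothing described above is a separate step not shown here.

```python
import numpy as np

def rigid_skin(rest_pos, driver, bind_inv, bone_world):
    """rest_pos: (N, 3) rest vertices; driver: (N,) bone index per vertex;
    bind_inv, bone_world: (B, 4, 4) inverse-bind and current bone matrices."""
    M = bone_world[driver] @ bind_inv[driver]           # one 4x4 per vertex
    v = np.concatenate([rest_pos, np.ones((len(rest_pos), 1))], axis=1)
    return np.einsum('nij,nj->ni', M, v)[:, :3]         # skinned positions

rest = np.array([[0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])     # two vertices
driver = np.array([0, 0])                               # both follow bone 0
bind_inv = np.eye(4)[None]                              # bone bound at origin
Rz = np.eye(4); Rz[:2, :2] = [[0, -1], [1, 0]]          # 90-degree z-rotation
print(rigid_skin(rest, driver, bind_inv, Rz[None]))     # -> [[-1 0 0], [-2 0 0]]
```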
This report presents a thorough investigation of the complex and active research area in both interactive global illumination (GI) and inverse lighting (IL) problems, with a focus on interactive applications and dynamic environments.
This thesis studies the problem of directly rendering, on graphics hardware, skinned approximations of arbitrary deformable objects that may also self-intersect, an important topic in computer animation and visualization. First, we provide efficient methodologies for editable segmentation and skinning representations of arbitrary animated mesh sequences that exploit temporal coherence from a pose-to-pose perspective. Second, we develop rendering algorithms for the efficient detection and trimming of (self-)crossing surfaces in image space, realized through novel multi-fragment rasterization, without computing any intersections. Since capturing multiple fragments efficiently on the GPU is a challenging task in terms of time, memory and robustness, we study several aspects of the multi-fragment rendering problem from various perspectives and present alternatives for reducing fragment contention, eliminating Z-fighting and avoiding fragment overflow.
In this dissertation, we propose a novel robust skeleton-based animation framework for articulated modular solid objects. The contribution of this work is twofold. First, we present refinement techniques for improving the skeletal representation based on local characteristics, which are extracted using the centroids and principal axes of the character's components. Skeleton-based animation is then performed using forward kinematics and quaternions. The components' positions vary over time, guided by an animation controller. Then, we use rigid skinning deformations, assigning each skin vertex one driver bone, to achieve realistic skin motion while avoiding vertex weights. A novel method eliminates the artifacts caused by self-intersections, especially in areas near joints, providing sufficiently smooth skin deformation. Finally, we have implemented all the above steps and performed an extensive experimental evaluation of our suite of techniques with respect to efficiency, robustness and quality of the final animation outcome.
This document contains a comprehensive list of (currently active) annual international events and premier publishers of science and technology resources: link.
A collection of different 3D levels for Gravity Ball, a marker-based augmented reality game developed in the Frailsafe project and targeting mobile devices. The goal is to guide a virtual sphere into the level's hole, the finishing point, as fast and as steadily as possible by moving the tangible handheld marker (a virtual textured terrain) accordingly.
3D digitization of a Belem Tower souvenir, a delicate cultural heritage object bought in Lisbon, Portugal. This task included digital recording via a 3D handheld laser scanner as well as data processing of the digitized object, which mainly involved geometric data repairing and fairing.
A tablecloth animation, realistically falling from a table, created using Blender's cloth simulator.
A skirt deformation created using Blender's cloth simulator.
Skirt | 5,095 vertices | 360 frames | 31 MB | CGI2016s
A flag animation, realistically blowing in the wind, created using Blender's cloth simulator.
Flag | 2,704 vertices | 1,000 frames | 45 MB | VC2017
A tsunami simulation created using Blender's ocean simulator.
Tsunami | 4,225 vertices | 1,250 frames | 131 MB | VC2017
A sea-foam simulation generated with Blender's ocean simulator under the influence of interactive user manipulation of a pink ball.
Ocean | 2,500 vertices | 1,500 frames | 76 MB | VC2017
A point cloud animation consisting of three disjoint subsequences. Each one is generated by moving (morphing) from one number into another with the aim of forming the sequence ''2017''. Each number is represented by a keyed particle system that randomly places points inside its volume. This animation was created with the Blender modeling software.
Morphing 2017 | 5,000 vertices | 600 frames | 30 MB | VC2017
A highly complex animation generated by applying a number of concurrent local self-crossing deformation operations (free-form deformations in conjunction with Laplacian smoothing) to a jug object.
Self-intersecting Jug | 7,478 vertices | 500 frames | 101 MB | CAD2013