Project Perfect Mod Forums


An idea about auto normals
Moderators: stucuk
TheNecro
Rocket Infantry


Joined: 22 Apr 2009
Location: >THE< United States of America

PostPosted: Fri Sep 29, 2017 7:38 pm    Post subject: An idea about auto normals

First, let me start by saying, Banshee, that your work on VXLSE has unequivocally been one of the most important developments in C&C modding. But I have to say that, for matching WW-quality voxels, the auto normalizer has some glaring flaws. It has trouble with anything that is not a flat, two-voxel-thick face. If there are curves, angles, or corners, it begins to break down. The more complex the face, the worse it gets.

Now, I have a theory, and it might be hard for you to implement, but it may be the key to achieving better quality with the auto normalizer.

What if, instead of only having color and normal painting options, we also had a face painting option that contained information stored only by VXLSE, purely for the purpose of auto normalization? Something like this:

Those colors could be stored in a separate file containing polygonal face information. Here I was editing the Rhino tank, so it could be a file called htnk.fce to go along with htnk.vxl and htnk.hva.

Different colors could represent different faces. A single color would produce subtle gradations across curves and slight angles, and where two or more colors met, the result would be sharp edges and seams. Face information could also force the auto normalizer to point all normals on a face outward, so that no outward-facing normals would accidentally end up pointing inward and cause lighting acne. These colors would not, of course, be displayed in game, but only during face painting mode in VXLSE.
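
To make the idea a bit more concrete, here is roughly how I picture the auto normalizer using those painted faces. This is only a sketch in Python-ish pseudocode; the names (face_of, raw_normal, neighbors) are made up and nothing here is real VXLSE code:

Code:
# Hypothetical sketch only: face_of, raw_normal and neighbors are made-up names.
# face_of maps (x, y, z) -> painted face id (the .fce layer in my idea).
# raw_normal maps (x, y, z) -> the normal the auto normalizer first guessed.

def smoothed_normal(voxel, face_of, raw_normal, neighbors):
    """Average raw normals over neighbors that share the voxel's painted face."""
    same_face = [v for v in [voxel] + list(neighbors(voxel))
                 if v in face_of and face_of[v] == face_of[voxel]]
    total = [sum(raw_normal[v][i] for v in same_face) for i in range(3)]
    length = sum(c * c for c in total) ** 0.5 or 1.0  # guard against zero length
    # where two colors meet, the other face's voxels are excluded,
    # so the seam stays sharp instead of being blended away
    return [c / length for c in total]

The point is just that blending would stop at a color boundary, which is what would keep the seams crisp.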

I don't know what your thoughts will be, Banshee, but hopefully this at least helps move VXLSE forward. And thank you again for your immense contributions to the RA2/TS modding community.

Graion Dilach
Defense Minister


Joined: 22 Nov 2010
Location: Iszkaszentgyorgy, Hungary

PostPosted: Sat Sep 30, 2017 5:39 am

Please stop talking about WW-quality voxels; it's irritating.

Westwood voxels are hollow, affected by black hole syndrome, have lousy texturing, and overall are usually a glaring mess of 3D conversions. In no way can you call those voxels "quality" ones - thankfully the community passed the WW crapvoxel era a decade ago.

I don't dispute that autonormalizing has issues dealing with curves, but the autonormalizer already handles a voxel better than WW would have treated it. And his doctorate is actually about the autonormalizer itself - and the techniques used within it.

_________________
"If you didn't get angry and mad and frustrated, that means you don't care about the end result, and are doing something wrong." - Greg Kroah-Hartman
=======================
Past C&C projects: Attacque Supérior (2010-2019); Valiant Shades (2019-2021)
=======================
WeiDU mods: Random Graion Tweaks | Graion's Soundsets
Maintainance: Extra Expanded Enhanced Encounters! | BGEESpawn
Contributions: EE Fixpack | Enhanced Edition Trilogy | DSotSC (Trilogy) | UB_IWD | SotSC & a lot more...

Mig Eater
Defense Minister


Joined: 13 Nov 2003
Location: Eindhoven

PostPosted: Sat Sep 30, 2017 5:52 am

This "face painting" looks exactly the same as just manually painting normals :/

Banshee
Supreme Banshee


Also Known As: banshee_revora (Steam)
Joined: 15 Aug 2002
Location: Brazil

PostPosted: Sat Sep 30, 2017 6:52 am

Graion Dilach wrote:
And his doctorate is actually about the autonormalizer itself - and the techniques used within it


Nope, it has nothing to do with it. At least, not during my doctorate. In the future, once I expand my current doctorate research into 3D, I could use it to obtain much better auto-normals.

TheNecro
Rocket Infantry


Joined: 22 Apr 2009
Location: >THE< United States of America

PostPosted: Sat Sep 30, 2017 7:19 am

Graion: Ok. That's nice. My last post was years ago, from before you joined. I have been gone longer than you have been active! Whatever developments have occurred are new to me. Achieving WW-level voxels was the goal then. Also, don't bash WW. They brought us the freaking game. They made the first voxels we had to aspire to. And have you even tried to break down and study the Kirov model? The normals are really quite damn good. Sure, there are some bad models in the bunch. I know the Rhino has some holes in it. The hollow thing doesn't really matter if you mostly see the top of the dang vehicle. Also, I'm sure Banshee is not at all perturbed by a simple thought. I made a suggestion as a "hmm... what if..." thing. I am not claiming I am better than whatever college doctorate he has, or is aspiring to. I even said I am not sure if what I am suggesting would be doable without a huge rewrite of code, and I even titled it "An idea about auto normals." So, don't come in here lobbing grenades at me.

MigEater: Well, it would be more about painting faces. So, like in Blender, 3DSMax, or even Wings, you designate faces by highlighting groups of polygons. Then, once all faces are grouped into UV clusters, you unwrap the model (almost like peeling the skin off an orange and laying the peel out flat to get a 2D look at the object), and the UVs are applied, as well as the texture. I was thinking along those lines. But now that I think about it, that was a really stupid thought, seeing how voxel models are constructed. Lol.

Banshee: Sorry, it was a dumb thought. Lol. I have what I think is a better idea than my original, but, as I am not a coder, I could not put it into good terms, and it would probably come out sounding uber-impractical again. Lol.

EDIT: Is this your work, Banshee?
Finding Surface Normals From Voxels
The only reason I ask is because of the mention of your doctorate, and this fascinating piece of academic work here on PPM about voxel normals and VXLSE III now seems too coincidental. It was a very fascinating read, and if it is yours, then I bow to your knowledge, sir! I was reading about how the rays are cast looking for solid materials, or the lack thereof. So, do voxels not use an ordered vertex arrangement? Like, if vertexes read from first to last are clockwise, cast the vector out, and if they are read counter-clockwise, cast the vector in? That's how it's done in most other 3D render suites. But, again, I have said I don't know that much about voxels... Sad

Also, does VXLSEIII find the vector for each exposed face of a voxel and then average between those vectors to find voxel facings?

Banshee
Supreme Banshee


Also Known As: banshee_revora (Steam)
Joined: 15 Aug 2002
Location: Brazil

PostPosted: Sat Sep 30, 2017 3:44 pm

TheNecro wrote:
What if, instead of only having color and normal painting options, we also had a face painting option that contained information stored only by VXLSE, purely for the purpose of auto normalization? Something like this:

...

Those colors could be stored in a separate file containing polygonal face information. Here I was editing the Rhino tank, so it could be a file called htnk.fce to go along with htnk.vxl and htnk.hva.

Different colors could represent different faces. A single color would produce subtle gradations across curves and slight angles, and where two or more colors met, the result would be sharp edges and seams. Face information could also force the auto normalizer to point all normals on a face outward, so that no outward-facing normals would accidentally end up pointing inward and cause lighting acne. These colors would not, of course, be displayed in game, but only during face painting mode in VXLSE.


That kind of information is something that VXLSE III does not have yet. It only knows the voxels separately... but it can't detect, at this time, which faces they belong to.

All auto-normalizer methods so far act locally: each obtains the normal value for a voxel according to a neighborhood in a fixed-size region around it.
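
Just to illustrate what "locally" means here, a simplified sketch (this is not the actual VXLSE III code, only the general trick of pointing the normal away from where the solid neighbors are concentrated inside a fixed-size box):

Code:
import numpy as np

def local_normal(grid, x, y, z, radius=2):
    """Toy local estimate: the normal points away from the centroid of the
    solid voxels inside a fixed (2*radius+1)^3 box around (x, y, z)."""
    sx, sy, sz = grid.shape
    offsets = []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                nx, ny, nz = x + dx, y + dy, z + dz
                if 0 <= nx < sx and 0 <= ny < sy and 0 <= nz < sz and grid[nx, ny, nz]:
                    offsets.append((dx, dy, dz))
    if not offsets:
        return np.zeros(3)
    direction = -np.mean(offsets, axis=0)   # away from the solid mass
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else np.zeros(3)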

Perhaps the easiest technique to detect faces (still quite heavy in terms of computational resources) would be something along the lines of the Hough Transform.

However, this is not in my plans. My future attempt to change this approach would really be to expand my doctorate research into 3D and use it instead. It would be better at detecting discontinuities and dealing with aliasing in discrete voxel data.

TheNecro wrote:
Banshee: Sorry, it was a dumb thought. Lol. I have what I think is a better idea than my original, but, as I am not a coder, I could not put it into good terms, and it would probably come out sounding uber-impractical again. Lol.


It is not dumb. It is just not practical at the moment. And if I implement my idea into the auto-normals tool, I don't know if it will be useful at all. But it is something to be considered indeed.


TheNecro wrote:
EDIT: Is this your work, Banshee?
Finding Surface Normals From Voxels


Yes, it is.


TheNecro wrote:
The only reason I ask is because of the mention of your doctorate, and this fascinating piece of academic work here on PPM about voxel normals and VXLSE III now seems too coincidental. It was a very fascinating read, and if it is yours, then I bow to your knowledge, sir! I was reading about how the rays are cast looking for solid materials, or the lack thereof. So, do voxels not use an ordered vertex arrangement? Like, if vertexes read from first to last are clockwise, cast the vector out, and if they are read counter-clockwise, cast the vector in? That's how it's done in most other 3D render suites. But, again, I have said I don't know that much about voxels...


Voxels from VXLSE III are simply organized in a 3D grid. Vertexes are not ordered in a clockwise or anti-clockwise direction at all. Most 3D render suites use polygonal meshes, and in that case they must indeed make sure their vertexes are properly ordered (clockwise or anti-clockwise) when declaring the faces, which are 2D surfaces... which is why it makes sense to order them that way. If these vertexes aren't ordered correctly, their normals get messed up (pointing in the opposite direction).
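
A tiny example of why the ordering matters for meshes (plain Python with made-up data, nothing VXLSE-specific): the cross product of two edges gives the face normal, and reversing the vertex order flips it:

Code:
import numpy as np

def face_normal(vertices):
    """Unit normal of a planar polygon from its first three vertices."""
    a, b, c = (np.asarray(v, dtype=float) for v in vertices[:3])
    n = np.cross(b - a, c - a)                 # order of the edges decides the sign
    return n / np.linalg.norm(n)

quad_ccw = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(face_normal(quad_ccw))        # [0. 0. 1.]  counter-clockwise -> points "up"
print(face_normal(quad_ccw[::-1]))  # [0. 0. -1.] reversed order flips the normal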

TheNecro
Rocket Infantry


Joined: 22 Apr 2009
Location: >THE< United States of America

PostPosted: Sat Sep 30, 2017 5:10 pm

I read through a good portion of Oliveira's main academic paper and stopped to really process the Hough Transform segment. Forgive me if I am wrong, but it seems that for HT you have to assume beforehand certain geometric shapes as candidate constructs for the algorithm to recognize. There is sometimes a mixture of swooping curves, angled protrusions, and other odd shapes on a single face. How would the function detect some of the oddest-shaped faces present on a voxel? Or would you break the whole model down into understandable shapes and then process it through the auto normalizer?

I think the problem with auto normalization currently is that there is a lot to be read into what the artist is trying to represent within a model.

Maybe when the user clicks auto normalize, they could be allowed to select voxels that represent hard corners, and then some variation of the Hough Transform would be used to process the faces on the voxel. Auto normalization would then work from both the human input and the computational assumptions. Selecting a few voxels to represent corners could go a long way toward keeping what the artist is representing clean, and it wouldn't take very long.

Edge detection like this, maybe:

The yellow represents an edge that a user paints, which could be used to arrange normals so that the edge looks crisp and clean. Anything not selected as an edge, the program would smooth between adjacent faces. I don't know if the way I am representing it makes sense, but I figure you know what edge detection is. Haha.
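
In rough pseudocode terms (again, I am just sketching; none of these names exist in VXLSE), what I mean is something like: blend each normal with its neighbors, but never blend across a voxel the user painted as an edge:

Code:
def relax_normals(normals, neighbors, edge_voxels, passes=3):
    """Hypothetical sketch: smooth normals everywhere except across painted edges."""
    for _ in range(passes):
        updated = {}
        for v, n in normals.items():
            if v in edge_voxels:                     # painted edge: keep it crisp
                updated[v] = n
                continue
            pool = [n] + [normals[w] for w in neighbors(v)
                          if w in normals and w not in edge_voxels]
            total = [sum(vec[i] for vec in pool) for i in range(3)]
            length = sum(c * c for c in total) ** 0.5 or 1.0
            updated[v] = [c / length for c in total]
        normals = updated
    return normals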

Banshee
Supreme Banshee


Also Known As: banshee_revora (Steam)
Joined: 15 Aug 2002
Location: Brazil

PostPosted: Sat Sep 30, 2017 8:38 pm

HT is a heuristic that decides which lines/curves are the most adequate by voting through a brute-force approach. The work I cited uses a geometric algebra framework to treat lines and simple curves in the same way. Since it is a heuristic, the results are far from accurate. This kind of approach probably has a hard time detecting discontinuities in curves, but I haven't tried it myself to know for sure.
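
If you are curious about the voting idea itself, here is a toy 2D line Hough Transform (it has nothing to do with the geometric algebra framework of the cited work; it only shows how the votes accumulate):

Code:
import numpy as np

def hough_strongest_line(points, n_theta=180, n_rho=200, rho_max=100.0):
    """Toy 2D line Hough Transform: every point votes for every (theta, rho)
    line passing through it; the most-voted cell is the best supported line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        for t, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)      # line: rho = x*cos + y*sin
            r = int((rho + rho_max) / (2.0 * rho_max) * n_rho)
            if 0 <= r < n_rho:
                votes[t, r] += 1                             # one vote per point
    t_best, r_best = np.unravel_index(votes.argmax(), votes.shape)
    return thetas[t_best], (r_best + 0.5) / n_rho * 2.0 * rho_max - rho_max

# points roughly on the line y = x vote most heavily around theta ~ 135 deg, rho ~ 0
print(hough_strongest_line([(i, i) for i in range(20)]))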

Anyway, the idea is to use a method like that to figure out the most appropriate region around a voxel from which to obtain its normal, eliminating noise and discontinuities that could affect the accuracy of the result. Note that there is no possible accurate result for voxels located in one-voxel-thick lines or walls.

TheNecro
Rocket Infantry


Joined: 22 Apr 2009
Location: >THE< United States of America

PostPosted: Wed Oct 04, 2017 7:23 pm

That's why it's so computationally heavy! It has to process and reprocess neighborhoods of voxels to find all possible shapes within the given space. Correct? Wouldn't user-defined edges that divide up those spaces ease the calculations somewhat? With hard edges pre-defined, it would also reach an outcome more appealing to the user, and might even make it easier for the program to work out normal facings.

On second thought, user-defined edges might only speed up the calculations nominally, if at all.

I don't think many people understand, though, that auto normalization is not the be-all and end-all of normalizing a voxel. Even with the most accurate results possible, user-defined edges, and proper voxel facings, there would still be a need to fix any remaining disparities on the voxel.

Banshee
Supreme Banshee


Also Known As: banshee_revora (Steam)
Joined: 15 Aug 2002
Location: Brazil

PostPosted: Thu Oct 05, 2017 6:50 am

Neighborhoods? No, it's heavier. I think it works more like ray tracing.

Anyway, if you wanna go with user-defined edges, use 3D models and convert them into voxels. If you can't make 3D models, use the 3D Modelizer from VXLSE III, export the result to .obj, and use a program like Blender to convert it to .3ds. There you can define your shapes and smooth what has to be smoothed. Once you are done with that, convert it back to a voxel with ViPr's program. At least until I write my own solution for it in the future.

TheNecro
Rocket Infantry


Joined: 22 Apr 2009
Location: >THE< United States of America

PostPosted: Thu Oct 05, 2017 7:26 pm

Wow. I am failing to fully grasp HTs. I understand they detect shapes by using vectors to build all possible lines in a subspace and counting each instance of a vector passing through a point in those spaces as a "vote" for a likely candidate for inclusion in a particular shape. I do not, however, understand how it builds possible shapes from the resulting voting data.

There's a reason I am a game designer, and not a game coder. Lol.

Anyway, I had read about the 3DS2VXL method. That is more up my alley. Still, my study of the way VXLSE auto normalizes, as well as the order and position of all the normals in index 4, has given me great insight into how to better perfect object lighting in my mod.
