Animation:Master

Recommended Posts

Posted

Rodney, I'd much rather stick 100% with A:M. There is something to be said for the complete control over a model/animation that you get with A:M but do not get with an AI generator. For example, I am working on a children's book publishing project with an old work acquaintance, and I am using an AI generator for the images. This description (@u6gA4zJVurJqgN4IlOt4 and @8vmAIe8pgljUIZRF4f1L

CHARACTERS:
@u6gA4zJVurJqgN4IlOt4 wearing white dress shirt, sleeves rolled up, tie loosened with a tartan print, no hat, focused and concentrated, leaning over the table
@8vmAIe8pgljUIZRF4f1L is wearing loose fitting cable knit sweater and dark slacks, leaning toward the monitor with keen interest
SETTING:
Police precinct conference room, 1940s era
Industrial-style walls, fluorescent and desk lighting
Professional but worn interior typical of a busy detective bureau
PROPS & DETAILS:
Wooden or metal conference table cluttered with: scattered photographs, hand-written notes, city maps, case files
Computer monitor on one side of the table displaying security footage (showing silhouettes of two figures—Dr. Mitchel and her visitor)
Desk lamp casting directional light across the table
Writing implements, coffee cups
LIGHTING & ATMOSPHERE:
Film noir color palette: deep shadows, high contrast
Warm desk lamp light illuminating the investigation materials
Cooler fluorescent overhead lighting creating atmospheric shadows
Dramatic interplay of light and shadow across faces and table
MOOD/COMPOSITION:
Investigative intensity, focus, concentration
Close, intimate framing on the detectives and their work
Tension and discovery in the moment of recognition)

Produces this image:

research.png

 

But it took many tries to get what I wanted. The same goes for video. I could take this image, feed it into a generator, give a description, and it just does it. BUT, I have no real control over HOW it looks other than the description. I'd much rather have complete control over the Chor. The real reason, for me, to use AI is time. I needed to generate over a dozen separate images with multiple characters, and write the book, all in a few weeks. No way I was going to pull that off without AI.

  • Hash Fellow
Posted

My hope for Rodney's AI venture is merely that it could do tedious tasks like taking a shape in OBJ form and making a proper low-density spline MDL of it.

I don't want it to be making scenes for me or generating characters, but if it could do the things that don't require creativity, that would be useful.

  • Hash Fellow
Posted
22 hours ago, fae_alba said:


LIGHTING & ATMOSPHERE:
Film noir color palette: deep shadows, high contrast

I'm wondering why a children's book needs to look Noir.

 

Quote

The real reason, for me, to use AI is time. I needed to generate over a dozen separate images with multiple characters, and write the book, all in a few weeks. No way I was going to pull that off without AI.

 

If I had to get a dozen images for a children's book... I'd call Rodney! Rodney is a very clever doodler. He makes great-looking characters. If it needed to be high-contrast I'm sure he could do that too.

(I hope it's OK that I showed these.)

RodneyDrawing.jpg   dragon.png

 

DrDs.jpg

  • love 1
  • Admin
Posted

I'd rather stick with A:M as well but that doesn't mean we still can't improve how we work with A:M.  :)

 

Here's a python script/model viewer I put together this morning:

image.png

  • Admin
Posted

As I was posting that last screenshot I added an option to use the Model's surface color to override the random patch colors.

image.png

The red dotted lines are (at least I think) patches with inverted normals.

In case you are wondering... this current viewer doesn't know what a Bone is.
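For anyone curious how a viewer might flag inverted normals: one common heuristic is to compare each patch's normal against the mesh's average normal and flag the patches that oppose it. This is a minimal sketch with a hypothetical four-corner-point patch layout, not A:M's actual MDL structure or the script's real code, and the heuristic only behaves well on mostly closed or convex shapes:

```python
def patch_normal(p):
    # p is four corner points (x, y, z) in winding order; the cross product
    # of two edges gives the patch normal.
    ax, ay, az = (p[1][i] - p[0][i] for i in range(3))
    bx, by, bz = (p[3][i] - p[0][i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def flag_inverted(patches):
    """Return indices of patches whose normal opposes the mesh average."""
    normals = [patch_normal(p) for p in patches]
    avg = tuple(sum(n[i] for n in normals) for i in range(3))
    return [i for i, n in enumerate(normals) if dot(n, avg) < 0]
```

A viewer could then draw the flagged patches with the red dotted outline instead of running this per frame.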

 

  • Admin
Posted

I started to ask GPT to allow Group surface colors but thought I should first try loading a model with a colored group to see what the viewer displayed... and that was already implemented.

The surface color (yellow) and the pants group (blue):

image.png

  • Admin
Posted

If nothing else it's a pretty handy diagnostic tool.

image.png

Where I think this might go, however, is toward a tool for organizing models, where we can see the models as we interact with the files.

We wouldn't need to rely on the icon previews (or generate them) when we can actually see the models.

  • Admin
Posted

This has been a fun exercise in figuring out how to render previews of lots of models in any given directory.

ModelManager.png

For instance, I had a few models with thousands of trees from one test, and needless to say those files made the program appear to freeze.

My first thought was to have the program place the grid squares with basic information about the model prior to rendering the model previews.

That helped quite a lot.

That still left rendering the queued grid squares taking a lot of time, which is troublesome if the alphabetically early filenames happen to be denser models.

So the next thought was to render previews for all of the smaller models first and then go back and fill in the larger model previews.

I'll need to think on this more as populating the previews needs to be as immediate as possible.
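The smallest-first idea can be sketched as a two-phase plan: emit lightweight placeholder entries for every model immediately, then render previews in file-size order so the quick ones fill in early. The entry layout here is illustrative, not the actual ModelManager code:

```python
import os

def build_queue(directory):
    """Phase 1: a placeholder entry (name + size) for every .mdl file, so
    the grid can be drawn at once. Phase 2: a render list ordered
    smallest-first, so dense models land at the back of the queue."""
    paths = [os.path.join(directory, f) for f in os.listdir(directory)
             if f.lower().endswith(".mdl")]
    placeholders = [{"path": p, "name": os.path.basename(p),
                     "size": os.path.getsize(p)} for p in paths]
    render_order = sorted(placeholders, key=lambda e: e["size"])
    return placeholders, render_order
```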

One thing that would speed up the previews a lot would be to not display group color but where would the fun in that be?

 

Yeah, a 3/4 view of the model would probably be preferred over front view.

Posted

@Rodney  I like what you have created with AI. When it comes down to speed, Python is apparently slower than C++ once the C++ is compiled. There are of course exceptions. The estimates I've seen in discussions:

For CPU-bound tasks, C++ is typically 10 to 100 times faster than standard Python (CPython). In extreme cases involving complex loops or mathematical calculations, C++ can be hundreds of times faster.

Keep up the AI experimentation.
  • Admin
Posted

(Forgot to post this)

Something I stumbled upon to speed up preview rendering:  Color everything gray at some filesize threshold, such as 100kb (which the user can change).

This suggests that what we could do initially is render all model previews in gray and then go back and progressively update them to color.

 

Decided to set a Min and Max so the user can set the sweet spot for automatic coloring.

The gray pass goes first and presents all models rather quickly.

The user can then choose to see the colors based on adjustments to the Max setting.

The previews are cached, so moving from one to another and back doesn't automatically rerender the preview.

Now browsing to a directory with 100 models can preview everything almost instantly.
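The gray-first-then-color flow with a cache might look roughly like this sketch; the "gray" string stands in for a rendered silhouette, the Min/Max window decides which models get queued for the slower color pass, and all names are made up for illustration:

```python
def render_pass(models, min_kb=0, max_kb=100, cache=None):
    """Gray-first strategy: every model gets a fast gray preview, and only
    models whose file size falls inside the Min/Max window are queued for
    the slower color re-render. Cached entries are skipped entirely."""
    cache = {} if cache is None else cache
    color_queue = []
    for path, size_kb in models:
        if path in cache:
            continue                   # already rendered on a prior visit
        cache[path] = "gray"           # quick silhouette pass
        if min_kb <= size_kb <= max_kb:
            color_queue.append(path)   # eligible for the color pass
    return cache, color_queue
```

Revisiting a directory with the same cache object is what makes the "almost instant" second visit possible.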

  • Admin
Posted

Glenn,

I agree.  C++ will be considerably faster.

That indeed would be the goal here.

Not to mention most users won't care to run python scripts when they are accustomed to running exe programs.

 

The key to that conversion will be finding the appropriate Libraries to build the C++ version.

GPT can assist in that as well AND.. I need to be leaning into that conversion process anyway as I have more than a few things I want to move from python to C++.

My longer term goal would be to spend most of my time in C++, but I bristle at the thought of connecting libraries and support files to MS Visual Studio as it's been such a pain for me compared to the immediacy of typing something like 'pip install thiscoollibrary'.

 

I'll get there though!  I just need more successes under my belt to feel it's not removing years of my life in the process. :)

 

  • Admin
Posted

Two additions:

Added a 3/4 view option, which should be the default as most models look better at that angle.

Added an export-to-contact-sheet option.

Interestingly, I forgot to state PNG as the export format so GPT created an SVG contact sheet.

That SVG file therefore contains all the polygon points of each model's faces in the image.

GPT was smart enough to know we don't want to render the things that won't be seen although I could see where having Three.js versions of the models that could be turned around would be advantageous.

When considering the contact sheet I was anticipating that it would be PNG imagery, as we could first check to see if a model has a preview icon embedded and use that instead of generating a new image.
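Since an SVG contact sheet is just text, one can be built with plain string formatting. This sketch lays out one labeled cell per model, with a rect standing in for the preview image (or the embedded icon, if the model has one); it isn't the GPT-generated sheet, just an illustration of the format:

```python
def contact_sheet_svg(names, cols=4, cell=160):
    """Minimal SVG contact sheet: one labeled cell per model name."""
    rows = -(-len(names) // cols)  # ceiling division
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{cols * cell}" height="{rows * cell}">']
    for i, name in enumerate(names):
        x, y = (i % cols) * cell, (i // cols) * cell
        # The rect is a placeholder where the preview/icon would go.
        parts.append(f'<rect x="{x}" y="{y}" width="{cell}" height="{cell}" '
                     f'fill="#ddd" stroke="#555"/>')
        parts.append(f'<text x="{x + 8}" y="{y + cell - 8}" '
                     f'font-size="12">{name}</text>')
    parts.append("</svg>")
    return "\n".join(parts)
```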

 

contactsheet.png

Posted
37 minutes ago, Rodney said:

I bristle at the thought of connecting libraries and support files to MS Visual Studio as it's been such a pain for me

I totally agree, going through that right now trying to create the SAMPLE.HXT and get it to compile. The instructions are for VS ver. 6. I used ChatGPT to give me a rough conversion from the version 6 instructions to where/what in VS 2026. Even though I configured the Additional Include Directories the same as on an existing plugin that compiles, they don't show up in the External Dependencies list and I get errors such as "cannot open source file "stdafx.h"". So the battle continues in learning the VS 2026 IDE and C++.

  • Admin
Posted

One thing I'm thinking of with regard to the contact sheets is automating the process of building collections of models à la the Extras CD/DVD.

At this point I don't have the preview moving through subdirectories but that would be the idea.

When contact sheets are created I must assume they'd likely be placed in their own directory because in many cases models are placed in their own unique directory for organization purposes.

Clicking on the preview of the model would then send the user to that directory.

 

I probably need to back off this particular exploration and consider the best way for users to navigate through large collections of models.

The original Extras CD had a navigation system created by Vernon, who got approval to use it from its original author.

It worked quite well but we didn't have that same system for the Extras DVD.

 

A system that would allow easy previewing and drag/dropping of resources into Animation:Master would be ideal.

  • Admin
Posted

In thinking about how we might parse large collections of models, here is a general plan:

 

Modelmanager.py
    Main GUI
    Browses contact sheets
    Reads index/cache files
    Starts/stops background helper
    Opens model locations

modelmanager_worker.py
    Background scanner/renderer
    Recursively finds .mdl files
    Generates previews
    Builds contact sheet pages
    Writes metadata

 

The idea being to have the main program call the helper script to do the work behind the scenes.

The user then is free to navigate through those contact sheets that are available with minimal delay.
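The main-program/helper split described above would typically be a subprocess plus a shared index file that the worker keeps rewriting. A rough sketch, where the JSON index format and file names are assumptions of mine rather than the actual scripts:

```python
import json
import subprocess
import sys
from pathlib import Path

def start_worker(root, index_path="model_index.json"):
    """Launch the background scanner as a separate process so the GUI stays
    responsive. 'modelmanager_worker.py' is the helper script named in the
    plan above; its arguments here are illustrative."""
    return subprocess.Popen(
        [sys.executable, "modelmanager_worker.py", str(root), index_path])

def read_index(index_path="model_index.json"):
    """GUI side: read whatever metadata the worker has written so far,
    tolerating the index not existing yet."""
    p = Path(index_path)
    if not p.exists():
        return []
    return json.loads(p.read_text())
```

The GUI can poll `read_index` on a timer and refresh whichever contact sheets have new entries.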

 

  • Admin
Posted


When done, a master html file can connect to all the contact sheets so that the user need not use the program to view the content unless they want to update the contact sheets and the content they contain.

 

Added:  Off to the side is a desire to have this process add preview icons to the model files so that A:M itself can display what the models look like in A:M libraries.

An issue with this for the Extras CD/DVD was that most models (and other A:M files) do not have preview icons, so the default icons for that file type are shown instead. This makes viewing those assets via libraries and other means less useful.

On the down side, adding a preview icon to a resource does add additional size to the file, as that image data is embedded in the text of the file.
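The embedded-icon idea, and its size cost, can be illustrated with base64, which is the usual way binary image data ends up inside a text file. The tag names and block layout below are invented for this sketch; A:M's real embedded-icon format may well differ:

```python
import base64

def embed_icon(model_text, png_bytes, tag="PreviewIcon"):
    """Append an icon block as base64 text. Base64 inflates the image data
    by roughly 4/3, which is the file-size cost mentioned above."""
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return f"{model_text}\n[{tag}]\n{encoded}\n[End{tag}]\n"

def extract_icon(model_text, tag="PreviewIcon"):
    """Recover the original image bytes from the embedded block."""
    start = model_text.index(f"[{tag}]\n") + len(f"[{tag}]\n")
    end = model_text.index(f"\n[End{tag}]")
    return base64.b64decode(model_text[start:end])
```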

  • Admin
Posted

As much as I like the 3/4 view there is an appeal to the front only view:

image.png

 

Older models have to be updated to appear in my ModelManager, as I haven't yet added support for models that aren't in A:M's modern format.

These are random models grabbed from the Free Models section of this forum.

The '_26c' appended to the filename here stands for '2026 candidate' in a general review for modern compatibility.

 

 

  • Admin
Posted

It seems GPT picked up on the problem and added a pass for older (Legacy) models without me asking for it.

legacy.png

Models that didn't even appear before now show up as gray silhouettes from the initial pass.

  • Admin
Posted

I took a break from exploring the ModelManager and went back to see if GPT remembered how to build basic shapes into reasonably recognizable models, and how to extract individual models from that master model based on the Named Groups it created while generating those objects.

The resulting master model, and the individual models extracted from it, turned out pretty recognizable.

I dropped those models into a new (empty) Chor and everything fell into place (minor adjustments of positioning in the referenced models for aesthetics).

I did have to flip the majority of patch normals in all of the models manually as Find Normals in A:M didn't resolve that.

I'm now trying to get GPT to figure out why so many normals are pointing the wrong way.

 

I opened a few materials from A:M's Library and dropped those onto the individual models.

Turned off the Chor's default lights and added two of my own.

The result:

Kitchen.0001.png

 

  • Admin
Posted

GPT does struggle with more organic curved shapes, but I chalk that up to me not giving it good examples to study.

Here's a sports car (where again I had to flip most of the normals manually):

SportsCar.png

I gave it a modified pipe.mdl from the A:M Library as a suggestion for the tires and that worked well.

I should have had it assign surface colors to the groups as it opted to color everything red.

Edit:  Actually it did color parts of the car differently, but the last group included everything and was colored red, so it overwrote all the other group colors.

  • Admin
Posted

The sports car drag/dropped into a default Chor and positioned.

Added a tank as I thought the more mechanical angles of a tank might be easier for GPT to model.

sportscar.0001.png

Keep in mind that I'm not supplying any reference material on what these objects should look like.

GPT is doing the design work on its own.

  • Admin
Posted

I'm trying to get GPT to place the Named Groups that contain smaller details (as opposed to collections of objects, which almost always have the ALL prefix) at the bottom of the hierarchical listing, so that the colors automatically assigned to those shapes appear. Currently GPT is inverting that listing, which buries those colors underneath the larger groups.

Here GPT generated a (one shot) city block with storefronts and cars out on the street in front of the stores:

cityblock_storefronts.0001.png

This looks much better than when everything of the same kind was all the same color because one group's color was overwriting the surfaces assigned to other groups.

  • Admin
Posted

Okay, what's going on here?

In trying to model some curved splines and patches I had GPT create a treasure chest (not particularly successful).

A treasure chest needs gold coins though right?

So I had GPT make stacks of gold coins.

Successful, but time-consuming to tweak and rerun with variations.

So, what to do?

Answer:  Have GPT create a python program to create stacks of gold coins with easily adjusted settings.

Many of the variables were informed by my failures to create good looking stacks of gold coins.

For instance, if the coins sit too close on top of each other, the stacks look too much like single long tall objects.

Even though the coins have random shades of orange and yellow.

So, need some distance between coins vertically.

Perfect stacks horizontally don't look good either.

So how about a 1 in 10 chance the coin will go in the same direction as the last coin placed?

Etc., Etc.

I started with GPT creating clusters of stacks with 1000 coins total.

Not the best starting demo, and we want the user to be able to set all those numbers as well (min and max for stacking, etc.).

100coinsTake1.png

Here are the Take 1 results from the python program that replicates the basic process GPT was using.

Not bad.

Save out a file with the settings for that, in case we want to recreate the same or similar set of coins (a seed value allows us to get the same results with random numbers).
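The stacking rules described above (a small random vertical gap between coins, a 1-in-10 chance of repeating the previous coin's orientation, and a seed for reproducibility) can be sketched like this. I'm interpreting "direction" as a rotation angle, and all the numbers are illustrative, not the actual program's settings:

```python
import random

def coin_stack(count, seed=None, gap=(0.05, 0.25), repeat_chance=0.1):
    """One stack of coins: each coin sits one coin-thickness plus a small
    random gap above the last, and by default has a 1-in-10 chance of
    keeping the previous coin's rotation instead of rolling a new one.
    Passing the same seed reproduces the same 'random' stack."""
    rng = random.Random(seed)
    coins, y, rot = [], 0.0, 0.0
    for _ in range(count):
        if rng.random() >= repeat_chance:   # usually pick a new rotation
            rot = rng.uniform(0, 360)
        coins.append({"y": round(y, 3), "rot": round(rot, 1)})
        y += 1.0 + rng.uniform(*gap)        # thickness of 1.0 plus the gap
    return coins
```

Saving the seed alongside the other settings is what lets a "random" arrangement be regenerated exactly later.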

Try 1000 coins (the program's current max count), as these models are being generated immediately, and...

1000coinsTake1.png

 

As each coin/object has its own group, we can grab any coin we want and adjust it.

Don't want to stack coins?

Point to a different model, such as a sheet of paper.

paperstacks.0001.png

These processes are pretty good at plussing up the Duplication Wizard.

Which reminds me.  I didn't add an option for rotating each object as it is placed.

 

 

 

 

  • Like 1
