Admin Rodney Posted April 13

Some of this will seem silly... so bear with me. I've been taking a rather naive approach to creating models and choreographies (and other A:M assets) with AI from time to time. This has mostly been experimentation, and definitely not using helpful constructs such as the A:M SDK. Part of this approach is that I don't know what I don't yet know, so I'm learning right along with the AI as it fumbles its way toward success.

First, let me say that I have found it easier for AI to create complex models by kitbashing shapes together into a Choreography. The shapes are separate models, easily dropped into place and manipulated. Troubleshooting tends to be easier, and simple models can be swapped out for more complex configurations. I'll share some of that exploration, where I had AI create some basic vehicles and the like. But here I'm exploring the creation of splines and patches inside a model file.

My first attempt to have AI create a simple robot: You be the judge on whether that could be considered a success... I let ChatGPT know that was a little too basic and shared this image so it could try again. The second pass: We could pursue a voxel-style approach, but I know we can do better than that.
Admin Rodney Posted April 13

So here I asked ChatGPT to move individual Control Points inward and outward to smooth out the shapes. I provided an image of the previous result just in case that might help.

You surely have noticed a fatal flaw in my approach thus far: my basic shape has 5-point patches capping its top and bottom. That is intentional, as we need to be solving for 5-point patches along the way; we just haven't dived very deeply in that direction (yet).

At this particular stage I am seriously wanting to tell ChatGPT that every bottom CP of a shape should be in the same location as a corresponding CP on the top of an adjacent shape. But should I chase that rabbit...
Admin Rodney Posted April 13

Here's a new 'base shape' I'm testing to see how well ChatGPT connects the dangling splines to the bottom of other base shapes.
Admin Rodney Posted April 13

After showing it two base shapes unconnected and two base shapes connected... time to challenge ChatGPT to connect these three base shapes:
Admin Rodney Posted April 13

Result*:

*There was a slight glitch that was easily corrected by deleting a spline ring and then undoing that deletion.
Admin Rodney Posted April 13

Still a long... long way to go but getting there little by little.
Admin Rodney Posted April 13

It appears I edited/deleted my interim postings when I thought I was replying. In those postings I shared the result of asking ChatGPT to use the new base shapes to recreate our robot. This is what it came up with at that first new stage:
Admin Rodney Posted April 13

I am severely distracted by the idea of having ChatGPT understand how to maintain 'straights versus curves' when smoothing shapes. In other words, if one side of a model (such as an arm) is relatively flat, the opposing side should be curved. That level of Control Point refinement is surely best kept for later in the modeling process.
Admin Rodney Posted April 13

Since we are here, I'll share a few of my early experiments with combining shapes from the A:M Library into models via Choreography files. Again, the idea is that ChatGPT (or another generative process) knows what certain shapes are based on the name of the model file. It has examples of common shapes, such as those in A:M's Library. Tell it to create a truck using those models and...
Admin Rodney Posted April 13

Perhaps a car made out of (modified) simple cubes:
Admin Rodney Posted April 13

And once we have a model created, it's trivial to duplicate and place copies of that model, staged to our specifications:
Admin Rodney Posted April 13

Here's an example of cubes placed in space based on a set of 3 sequential images, each frame being a grid where a space is either on or off (filled with a cube or not):

Frame 1 places the red cubes
Frame 2 places the green cubes
Frame 3 places the blue cubes
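A minimal sketch of how that image-to-cubes mapping might be scripted. The filenames, grid size, and spacing are made up, staggering each frame's layer in Z is just one possible arrangement, and the output is plain placement tuples rather than an actual chor:

    from PIL import Image

    FRAME_FILES = ["frame1.png", "frame2.png", "frame3.png"]  # hypothetical inputs
    FRAME_COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]    # red, green, blue per frame
    GRID = 8          # assumed grid resolution
    SPACING = 25.0    # assumed cube spacing

    def filled_cells(path, grid=GRID):
        # Downsample the image to grid x grid; a dark cell counts as 'on'.
        img = Image.open(path).convert("L").resize((grid, grid))
        px = img.load()
        return [(c, r) for r in range(grid) for c in range(grid) if px[c, r] < 128]

    placements = []  # (x, y, z, rgb) tuples, one cube per filled cell
    for layer, (path, rgb) in enumerate(zip(FRAME_FILES, FRAME_COLORS)):
        for c, r in filled_cells(path):
            # Flip rows so image 'up' maps to world +Y; stagger each frame's cubes in Z.
            placements.append((c * SPACING, (GRID - 1 - r) * SPACING, layer * SPACING, rgb))

    print(f"{len(placements)} cubes to place")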
Admin Rodney Posted April 13

How about this earlier test: "Replace every other cube with a sphere." At this point I was also generating a random surface color for each model.
Admin Rodney Posted April 13

I did play with animation a little: having ChatGPT create a tunnel (two walls, a floor and a ceiling) of 140 repeated sections, with electrical boxes every x number of sections, and a bouncing ball moving in front of the camera as the camera travels down the tunnel.
Pizza Time Posted April 13

How do you get ChatGPT to make the model for you? Can it really make Hash models? Can you walk us through how you did it?
Admin Rodney Posted April 14

The main thing I do is give ChatGPT examples and then task it with doing something related to that example. Some things are trivial. For instance, replacing one model with another: all it has to do in that case is change the name of the model being referenced. Consider, for instance, that you want to swap out the materials on a large set of models... or give many models different decals. ChatGPT just needs to know what materials or decal images you want to use, and then it will perform that for us. These are the easy cases, as this process is basically just automating what we would otherwise be doing manually. It does help that most Animation:Master files are text based, and ChatGPT (and Python) work great with text-based formats.

So we train the AI on what a valid file for Animation:Master is; the more examples it has, the better. It can then use that data to 'recognize' patterns. Think of it in these terms: how does an AI image generator know what to create when you tell it to create an image of a cat, a house, a building, etc.? The answer is that the AI model has been trained on data that classifies bits and bytes of data as cat-like, house-like, etc. The same can be true of spline-patch models, but the challenge is that (to my knowledge) no one has trained any large language models on Hash Inc's file formats and data. Generally speaking they have access to information about those formats, but likely not specific enough for our purposes.

One of the projects I didn't share here was having ChatGPT stage a living room with couch, TV, etc., but with all objects made out of a single cube. It would be trivial to point that project to a directory with more detailed furniture models... even have ChatGPT do that. AI isn't performing magic. It's doing what we might do otherwise... just a whole lot faster. It also can and does make mistakes faster.

Note that I am using the paid version of ChatGPT, so it has some capabilities that the free version will not have. As such, it will be good to test for the differences, as we do want to use free processes wherever possible.

I need to attend some of the Saturday Live AnswerTime sessions where I can walk through the process, and those present can see the process, the struggle, etc. and help chart the best way forward. Short of that, perhaps I can start to record my AI-enabled project-making sessions.
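To make that 'easy case' concrete, here is a minimal sketch of swapping one model reference for another in a chor file. The filenames are hypothetical; the path convention is the one visible in the helper script shared later in this thread. Matching the full quoted path avoids accidentally touching similarly named models (Cube_2.mdl, for example):

    from pathlib import Path

    CHO_IN = Path("scene.cho")           # hypothetical input choreography
    CHO_OUT = Path("scene_swapped.cho")

    # A:M chor files reference models by a quoted path, e.g.
    #   Cache=..|..|..|Objects|"C:/Program Files/Hash Inc/V19.5/Data/Models/Primitives/Cube.mdl"
    # so swapping a model is a plain text substitution on that quoted reference.
    PRIMITIVES = "C:/Program Files/Hash Inc/V19.5/Data/Models/Primitives"
    old_ref = f'"{PRIMITIVES}/Cube.mdl"'
    new_ref = f'"{PRIMITIVES}/Sphere.mdl"'

    text = CHO_IN.read_text(encoding="utf-8")
    print(f"Replacing {text.count(old_ref)} reference(s)")
    CHO_OUT.write_text(text.replace(old_ref, new_ref), encoding="utf-8")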
Admin Rodney Posted April 14

I do get distracted easily, and ChatGPT increases the odds of distraction considerably. I'll get an idea and explore it until I'm satisfied or meet sufficient resistance to break away. For instance, I just thought of drawing objects in Opentoonz and having ChatGPT generate a 3D choreography based on that drawn scene. Initial results are promising. The crux of these early tests is that each 'object' I draw in Opentoonz is given a name; that name is then what Animation:Master will attempt to find when opening the .chor. As we know, if a model isn't found, A:M will prompt the user for the model's location. Thus far I am mostly spinning cubes but... gotta start somewhere.
Admin Rodney Posted April 14

For those curious, ChatGPT decided on its own to share a 'helper script' that converts an Opentoonz scene to an Animation:Master .cho file. Pretty basic stuff:

    import re
    from pathlib import Path

    # Source OpenToonz scene (kept for reference; the OT-to-A:M values used in
    # main() below were mapped from it in a prior pass).
    TNZ_PATH = Path("twocubesrotating_onemoving.tnz")
    BASE_CHO_PATH = Path("twocubesrotating_onemoving_from_OpenToonz.cho")
    OUT_CHO_PATH = Path("twocubesrotating_onemoving_linearized.cho")

    def linear_values(v0: float, v1: float, frame_count: int = 42):
        # Evenly interpolate from v0 to v1 across the frame range.
        return [v0 + (v1 - v0) * i / (frame_count - 1) for i in range(frame_count)]

    def spline_text(values):
        # Emit one "7 <frame> <value>" row per frame inside a <SPLINE> block.
        rows = [f"7 {i} {v:.6f}" for i, v in enumerate(values)]
        return "<SPLINE>\n" + "\n".join(rows) + "\n</SPLINE>"

    def replace_model_xy_splines(cho_text: str, model_filename: str,
                                 x0: float, x1: float,
                                 y0: float, y1: float) -> str:
        # Locate the model's shortcut block and swap out its X and Y translate splines.
        model_path = f'C:/Program Files/Hash Inc/V19.5/Data/Models/Primitives/{model_filename}'
        pattern = (
            rf'(<MODELSHORTCUT>\s*Cache=\.\.\|\.\.\|\.\.\|Objects\|"{re.escape(model_path)}".*?'
            rf'<TRANSLATECHANNELDRIVER>\s*MatchName=X\s*)<SPLINE>.*?</SPLINE>'
            rf'(\s*</TRANSLATECHANNELDRIVER>\s*<TRANSLATECHANNELDRIVER>\s*MatchName=Y\s*)<SPLINE>.*?</SPLINE>'
        )
        replacement = (
            rf'\1{spline_text(linear_values(x0, x1))}'
            rf'\2{spline_text(linear_values(y0, y1))}'
        )
        new_text, count = re.subn(pattern, replacement, cho_text, flags=re.S)
        if count != 1:
            raise RuntimeError(f"Expected exactly one replacement for {model_filename}, got {count}")
        return new_text

    def main():
        cho_text = BASE_CHO_PATH.read_text(encoding="utf-8")
        # OT-to-A:M mapped values already established in the prior pass.
        cho_text = replace_model_xy_splines(cho_text, "Cube.mdl",
                                            -489.170000, -508.291000,
                                            221.481000, 232.635000)
        cho_text = replace_model_xy_splines(cho_text, "Cube_2.mdl",
                                            4.780160, 560.872000,
                                            0.000000, -239.008000)
        OUT_CHO_PATH.write_text(cho_text, encoding="utf-8")
        print(f"Wrote {OUT_CHO_PATH}")

    if __name__ == "__main__":
        main()

Keep in mind the end goal: we wouldn't want to ask ChatGPT to do the conversion every time when we can just run a script/program to do it.
Admin Rodney Posted April 14

Wanted to see if ChatGPT would draw a smiley face using 20 cubes. Upside down, but not bad for a first try with no special guidance.
Hash Fellow robcat2075 Posted April 16

On 4/13/2026 at 2:12 AM, Rodney said:
"Still a long... long way to go but getting there little by little."

That one is pretty good, but how did it know the correct use of five-pointers without a previous example like that?
Admin Rodney Posted April 17

Short answer: it didn't. Some of my posts were deleted by accident, so I can see where that might be misleading. I manually stitched the separate parts that the AI had assembled into one unibody mesh.
Hash Fellow robcat2075 Posted April 17

3 hours ago, Rodney said:
"Short answer: it didn't. Some of my posts were deleted by accident, so I can see where that might be misleading. I manually stitched the separate parts that the AI had assembled into one unibody mesh."

Have you tried showing it the pre-stitched and post-stitched versions to see if it can learn that?
Admin Rodney Posted April 27

Baby steps. Baby steps. I am being overly conservative, but I'm convinced there surely must be a method to my madness.

Here I had ChatGPT (pretend to) understand:

What a spline is
What control points on the spline are
What named groups are, and why they are important to us moving forward (we can use the groups programmatically to store information, record processes, and generally identify things at the spline/patch level)

In attempting to get GPT to understand extrusion of a spline, it failed initially. Upon redirection, it successfully extruded the next spline generated by the last extrude and correctly moved all CPs (and splines/patches) into an ALL group.

For some reason I've long had an issue with Normals being flipped backward, so the successful extrusion GPT made had that normal-inversion flaw. I am always wary of the garbage-in, garbage-out principle, so I'm usually not surprised when some element is off. I very likely forgot to supply the correct source: in this case, an example with normals flipped so the model can properly be seen from the front.

At any rate, a very small success in generating splines/patches in a Model while hopefully getting some basic training into the mix. Most of our previous successes were using Choreographies.

I'm proud of you, GPT. Keep up the great work!

ExtrudedSpline_6Patches_All_attempt_manuallyflippednormals.mdl
Admin Rodney Posted April 27

I still haven't told GPT to correct the flipped normals, but... directing it to extrude the original spline 10 times and group each new extrusion with an incrementing name was successful in one shot. Even before this step I was already thinking of having the extrusions move back in Z depth, but thus far I've kept everything in the vertical plane.

ExtrudedSpline_10Extrusions_Groups_All.mdl
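The bookkeeping behind that request is simple enough to sketch in a format-agnostic way. The ring coordinates and step size below are invented, and serializing the result to an actual .mdl is omitted since that part is format-specific:

    import copy

    # A 'ring' is an ordered list of (x, y, z) control points.
    base_ring = [(-10.0, 0.0, 0.0), (0.0, 0.0, 10.0), (10.0, 0.0, 0.0), (0.0, 0.0, -10.0)]

    rings = [copy.deepcopy(base_ring)]
    groups = {"Extrude_0": list(range(len(base_ring)))}  # group name -> CP indices

    STEP_Y = 10.0  # assumed vertical extrusion step
    for n in range(1, 11):  # ten extrusions, as in the experiment
        prev = rings[-1]
        new_ring = [(x, y + STEP_Y, z) for (x, y, z) in prev]  # offset the last ring upward
        first_index = n * len(base_ring)
        groups[f"Extrude_{n}"] = list(range(first_index, first_index + len(base_ring)))
        rings.append(new_ring)

    # Every CP also belongs to an ALL group, mirroring the earlier test.
    groups["ALL"] = list(range(len(rings) * len(base_ring)))
    print(f"{len(rings)} rings, groups: {sorted(groups)}")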
Admin Rodney Posted April 27

For no particular reason I had GPT randomly flip normals on the patches:
Admin Rodney Posted April 27

Wanted to see if GPT could figure out how to randomly apply different colors to each of the extrude groups. Stage unlocked.
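For reference, that per-group color pass amounts to very little code. The group names follow the earlier extrusion sketch; in the actual .mdl the color would be recorded as a surface attribute on each named group, which is assumed rather than shown here:

    import random

    group_names = [f"Extrude_{n}" for n in range(11)]  # from the previous sketch
    random.seed(42)  # reproducible colors while iterating with GPT
    group_colors = {name: tuple(random.randint(0, 255) for _ in range(3))
                    for name in group_names}
    for name, rgb in group_colors.items():
        print(name, rgb)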
Admin Rodney Posted April 27

Quote: "move the CPs of every 3rd Extrude group back in Z depth by 10cm."
Admin Rodney Posted April 27

This is an important step. Without my providing any specific guidance on how to extrude horizontally, GPT correctly extruded 10x to the right. Such a minor thing, right? But I love those one-shot successes.
Admin Rodney Posted April 27

Time to take some risks (a leap of faith) and generate colored grid models from images.

Quote: "Given this 20x20 grid image, replicate the grid as a spline-patch model with appropriate colors taken from the image. Produce 2 models: one with all of the grid squares and one without the white grid squares around the face."

Result: I hedged the bet a little by having one model be the full grid and the other try to isolate the face. Something is odd about the model with the isolated face, but it's pretty impressive. The face is detached and separate from the other CPs and splines, but those do not appear to be fully valid splines/patches. I'll be saving that model to see if A:M 'fixes' it in some way. Intriguing possibilities...
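For the curious, the color half of that task (sampling a 20x20 grid image and dropping near-white background cells for the face-only model) might look like the sketch below; the filename and the white cutoff are assumptions:

    from PIL import Image

    GRID = 20
    img = Image.open("face_grid.png").convert("RGB").resize((GRID, GRID))  # hypothetical input
    px = img.load()

    WHITE_CUTOFF = 240  # treat near-white cells as background for the second model

    cells = []       # full-grid model: every cell with its sampled color
    face_cells = []  # face-only model: white background cells dropped
    for row in range(GRID):
        for col in range(GRID):
            r, g, b = px[col, row]
            cells.append((col, row, (r, g, b)))
            if not (r > WHITE_CUTOFF and g > WHITE_CUTOFF and b > WHITE_CUTOFF):
                face_cells.append((col, row, (r, g, b)))

    print(f"{len(cells)} cells total, {len(face_cells)} kept for the face-only model")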
Hash Fellow robcat2075 Posted April 27

The extrusion experiments look promising. The colored-square stuff is giving it improper ideas about modeling. A next step for the extrusions might be to show it how hooks work and see if it can use them to reduce the spline density.
Admin Rodney Posted April 28

Quote: "The colored square stuff is giving it improper ideas about modeling."

I agree it's not optimal modeling, but I'm sure you'll agree grids are a very proper form of spline/patch modeling.

Quote: "A next step for the extrusions might be to show it how hooks work and see if it can use them to reduce the spline density."

Hooks, dangling splines, 3- and 5-point patches... there are many variables to contend with, which is why I've focused so far almost exclusively on 4-point patches. (The 5-point patches are an exception, as I know them to be a quick way to cap any object lathed or modeled in such a way as to end in a 5-point spline.) A simple introduction to Hooks is definitely something we can introduce at this stage. Easy to provide examples and to test.

In theory a model of nothing but 3-point patches might be infinitely easier to use when converting from external model formats, but even there continuity is a key obstacle. Continuity of splines may be the first and foremost obstacle of them all in modeling. Hooks at least provide a defined moment of termination of continuity, which makes them a very logical next step to explore.

I'll see what GPT makes of this model, with a grid transitioning to a less dense mesh via a Hook:
Admin Rodney Posted April 28

GPT is struggling with the understanding of Hooks (although it's making some progress). It would be good to feed it some information about Hooks from the A:M SDK.

Of note: Animation:Master does a very good job of repairing models where it can. Where I am currently with Hook testing is getting GPT to generate the model, A:M identifying the error upon loading, and A:M fixing the error to the best of its ability. In some cases this requires manual fixing to take the model to its intended state (although not in my latest test, where A:M fully repaired the problem). I then feed the repaired model back to GPT (along with any error messages) so it can compare the good model with the bad and make appropriate adjustments.

Here's an example of a generated mesh that A:M repaired upon opening:
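That feed-back step can be automated. A minimal sketch, with hypothetical filenames, that produces a unified diff between the file GPT generated and the version re-saved after A:M's repair:

    import difflib
    from pathlib import Path

    generated = Path("hook_test_generated.mdl")  # hypothetical: what GPT wrote
    repaired = Path("hook_test_repaired.mdl")    # hypothetical: re-saved after A:M fixed it

    diff = difflib.unified_diff(
        generated.read_text(encoding="utf-8").splitlines(keepends=True),
        repaired.read_text(encoding="utf-8").splitlines(keepends=True),
        fromfile=generated.name,
        tofile=repaired.name,
    )
    # Paste this diff (plus any A:M error messages) back into the chat so the
    # model can compare what it produced against what A:M considers valid.
    print("".join(diff))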
Admin Rodney Posted April 28

I need to learn not to navigate away from my post before posting, as it often leads to the loss of that post.

Animation:Master is very good at repairing file data. I am unaware of anywhere that this repair-upon-loading process is documented (including the A:M SDK). Those error-handling routines seem very significant to me where it comes to resolving attempts to generate valid models (and files for use in A:M in general). Perhaps those error-handling routines can be shared?

But the deeper thought: where GPT can target results that rise to the threshold where A:M can take over and repair the model... that's not optimal, but it is highly significant. I've often considered A:M to be the best validator of its own file format, and this 'self-healing' aspect of modeling is one proof of concept of exactly that.

A:M may in some cases repair an error into a working file but leave errors for the user to manually repair. I just had one of those, where GPT created a mesh with a valid Hook but created an error at the termination of other patches. That error was resolved by simply extruding that section out further. So some minor data was invalid, but not enough to keep the model from loading into A:M. The error, being obvious, was easy to spot, and a quick attempt to extrude (accidentally) made the repair. Unlike GPT, A:M is programmed to only create valid files/models.
Admin Rodney Posted April 29

I wasn't really getting anywhere with hooks, so I took a break.

Returning to the task of generative modeling of A:M models, I thought I'd lean toward continuity of splines but... without the full continuity. I introduced this shape (one single patch with dangling CPs on the splines) and told GPT to create a four-legged chair in 3D space. I suggested that, rather than trying to resolve connectivity at corners, it create a new mesh surface for each planar surface.

It only got it about half right the first try. It did much better on the second attempt:

The dangling splines are not pretty, but the idea would be to have GPT connect those surfaces via the dangling CPs.
Admin Rodney Posted April 29

Something I think is of importance: note how GPT has named all of the various surfaces of the model as Groups. This is useful on many levels, but primarily for identification. If we were to tell GPT to connect SeatTop to SeatFrontFace, it might not actually do it correctly, but at least it has named those parts, so it knows what they are, giving it a fighting chance of success. From a user's perspective it saves a lot of time performing manual identification.
Admin Rodney Posted April 29

Interestingly, GPT knew it was creating 9 parts for this bookshelf (as I told it to determine how many parts a model would require before creating the model), but it didn't know to combine the patch groups together under a new group name for that specific part. I'm attempting to remedy that oversight on my part. Here I've selected the groups that belong to the left side and created a new group with a green surface so I can give GPT that feedback.
Admin Rodney Posted April 29

I'm certainly cheating a bit by generating models with flat planar surfaces but... again... method to madness. I had GPT add books to the bookcase, and while it did it the first time, the books looked more like boxes because of how wide they were. Second attempt:

This is the point where I've introduced GPT to the concept of Group Folders. Before, we were getting a lot of benefit out of Named Groups, but organizing Groups into Folders will be even more useful. Specifically, the set of groups used just for generative modeling won't be very useful to users, so those groups can be placed in their own Folder. It is there that temporary data can be stored as well. Consider, for instance, a group that stores instructions; once a task is successfully completed, that task/group is moved from one Group Folder to another. Other groups would be used specifically for targeting, to apply bones, decals, materials, etc. Some rules likely need to be devised to determine which groups can be removed without affecting the look of the model and which are essential.
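A minimal sketch of that folder bookkeeping idea, with illustrative (not A:M-native) folder and group names:

    # Generative-only groups live in their own folder and migrate to 'Completed'
    # once their stored instruction has been carried out.
    folders = {
        "Generative/Pending": {"SeatTop": "connect to SeatFrontFace"},
        "Generative/Completed": {},
        "Rigging Targets": {"LeftLeg": None, "RightLeg": None},
    }

    def complete_task(group_name):
        """Move a group (and its stored instruction) from Pending to Completed."""
        instruction = folders["Generative/Pending"].pop(group_name)
        folders["Generative/Completed"][group_name] = instruction

    complete_task("SeatTop")
    print(folders)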
Admin Rodney Posted April 29

GPT did a pretty good job of organizing this bookshelf (and did so quickly):
Admin Rodney Posted April 29

All in all, not bad for a generic bookshelf. It's nice that A:M works well with dangling splines... but I'm about to tell GPT to stop using them!
Admin Rodney Posted April 29

GPT is still struggling, but it does get some basic things right. Here's an attempt to have GPT create 3D letters "AM" in red. I moved some CPs around. I'd have GPT try to merge the shapes, but my confidence in that is low.
Admin Rodney Posted April 29

Giving up on that general approach, I tried a prompt to generate a living room, and this is the result in one shot: I have made the left wall transparent to better see the layout. I also fixed many flipped normals, which A:M's Refind Normals didn't resolve. No dangling splines in this, as GPT has moved on from that for now.

My actual prompt for this:

Quote: "Let's back up and, with simple spline/patch shapes, generate a living room model. Focus on adding simple planes and cubes of appropriate size and location in the room, labeling each major collection of parts with the item it is: couch, coffee table, TV, chair, etc. Take your time and plan everything out in advance."

The downside of generating this in a Model versus a Chor is that we cannot just point a reference at a different model, as these splines/patches are all part of the one model. In a Chor, GPT could create a simple proxy model and reference that; we could then point A:M at a hero model just by changing the reference.
Admin Rodney Posted April 29

Given the constraint mentioned before, about not being able to swap parts of a model as easily as we can swap models in a Chor, I'm thinking that because GPT is creating Named Groups for all objects in the model, we can use those named groups to extract parts of the model and place them into their own models. Then we can load all of those models into the Chor, where they are more easily swappable. I have some other ideas here but am trying to stay focused on the task at hand.

And minutes later, success! GPT used the Groups to extract the objects from the model and create new models for each object. It then provided all of those models in a zip file for download. I downloaded, extracted, and loaded the models into A:M and drag-and-dropped them into the Chor. Because I told GPT to keep the objects in the same place, they appear in the Chor just as they did in the Model, but are now individual models that can be updated as needed.

Note that I told GPT to only extract the models in the Furniture and Decor Folders (for some reason it had placed the rug and lamp into the Decor folder instead of the Furniture folder). So no walls or other groups' splines and patches were extracted. I'm going to call that one a success.

Aside: I did give GPT the living room model with corrected normals, and it suggested it might be able to improve its output based on a comparison with the older (normal-flipped) model. We shall see.
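A hedged sketch of that extraction step, assuming the living-room model has already been parsed into per-group text chunks (the parsing itself is format-specific and omitted, so the bodies below are placeholders); the group-to-file split and the zip bundling mirror what GPT delivered:

    import zipfile
    from pathlib import Path

    # name -> model text for each extracted group (placeholder bodies)
    extracted = {
        "Couch.mdl": "...couch splines/patches...",
        "CoffeeTable.mdl": "...table splines/patches...",
        "TV.mdl": "...tv splines/patches...",
    }

    out_dir = Path("extracted_models")
    out_dir.mkdir(exist_ok=True)
    for name, body in extracted.items():
        (out_dir / name).write_text(body, encoding="utf-8")

    # Bundle them the way GPT delivered its results: one zip for download.
    with zipfile.ZipFile("extracted_models.zip", "w") as zf:
        for name in extracted:
            zf.write(out_dir / name, arcname=name)
    print("Wrote extracted_models.zip")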
Admin Rodney Posted April 29

Animation:Master for the win again. Attached is the Project file with the entirely ChatGPT-generated living room, all in 164 KB, with all model files embedded.

Once the objects were extracted into their own models, I used the TV model as a test for GPT to improve the detail of the TV. Once it was refined as a new model, I swapped the TVs, so both TVs are in the project but the refined TV is the one referenced in the Chor.

I have not yet GPT'd decals or patch images in this iteration, but I have done tests with those before. (Any image would have to be included along with the project file anyway, as project files can't store bitmap imagery.**) Because we have a group for it in the model, we can easily assign a patch image to the TV screen.

**Technically A:M's text files can store image data, but that's another day's discussion. (I'd love to see A:M support base64 image data, but that would break backward compatibility. We can leverage the RLE bitmap data of the preview icons, however; that preview can serve as a proxy image pending the loading of higher-resolution data. A downside of base64 data is that file sizes can be huge.)

GeneratedLivingRoom.prj
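On the base64 aside: the size cost is easy to quantify, since base64 turns every 3 bytes into 4 text characters (roughly a 33% inflation, before any line breaks). A quick check:

    import base64

    raw = bytes(75_000)  # stand-in for a ~75 KB bitmap
    encoded = base64.b64encode(raw)
    print(len(raw), len(encoded), len(encoded) / len(raw))  # 75000 100000 1.333...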
Admin Rodney Posted April 29

Celebrating that success with a render:

livingroom.mp4
Admin Rodney Posted April 29

I should be heading for some sleep, but I haven't yet seen the inevitable moment when success stalls. Here I'm just iterating on the updating of one of the extracted models (the coffee table), giving it an update by having GPT place some additional proxy models onto it. In theory at least, we could keep iterating, extracting models from models and adding detail to each as we go.

Objects placed on the coffee table include: a stack of magazines, a chess board with white and black pieces, and a glass of water 1/3 full. Each of these objects presents different challenges to generative modeling moving forward.

For the magazines: eventual rotation, decals or patch images
For the chessboard: finely detailed organic and lathed objects
For the glass of water: transparency for the glass, cylindrical modeling (and yes, GPT did place blue 'water' inside the glass, approximately 1/3 full)

This would not be accomplished in the proxy state but after the objects are extracted into their own models.

Additional considerations: since the models were extracted from the parent model, they each have a model bone at the same center location as the living room. A second bone needs to be placed at each new model's center in order to facilitate rotation in place (as opposed to rotation from the center of the room). In the case of the chair, I manually placed a bone and rotated the chair. More complex tasking would include Constraints, such as an Aim At constraint (with offsets) that would automatically orient a model toward a specified location.
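Placing that second bone is itself scriptable: a reasonable pivot is the midpoint of the part's bounding box over its CPs. A minimal sketch with placeholder coordinates:

    def bounding_box_center(cps):
        """Midpoint of the axis-aligned bounding box of a list of (x, y, z) CPs."""
        xs, ys, zs = zip(*cps)
        return ((min(xs) + max(xs)) / 2,
                (min(ys) + max(ys)) / 2,
                (min(zs) + max(zs)) / 2)

    # Placeholder CPs for an extracted chair; real values would come from the model.
    chair_cps = [(120.0, 0.0, -80.0), (160.0, 0.0, -80.0),
                 (120.0, 45.0, -40.0), (160.0, 90.0, -40.0)]
    print(bounding_box_center(chair_cps))  # pivot for an in-place rotation bone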
Hash Fellow robcat2075 Posted April 29

After you get some sleep, you could show it that splines can turn corners.
fae_alba Posted Wednesday at 10:43 PM

It should be easy enough to build a model that encompasses all of the combined knowledge of what an A:M model is and how it is constructed, then do the same for rigging, etc. Pretty soon that AI model could be an additional tool in A:M, or available as a SaaS API called by the A:M UI. Then we could take this to true scripting: "Build me a female model that looks like..."

BUT: then it becomes an exercise in justifying that, since you can do that quite easily on any AI generator now.
Admin Rodney Posted Thursday at 02:34 AM

Hi Paul,

I agree that's where this is heading, probably especially the 'build me a female model that looks like...' part. Ah, but this generative modeling (and animation) thing can be so very much more than that.

Nowadays, unlike in the past, we can connect ChatGPT and other services to many documents, and I've not kept up with that aspect. I've been conditioned by past experience not to let the AI have access to that extensive documentation. I do have a general path forward that still addresses my interest in learning as I go: basically, slowly introducing GPT to specific documentation related to the task at hand. For instance, if having GPT work with Hooks, I need to be feeding it documentation from the A:M SDK about the handling/processing of Hooks. My current approach is intentionally naive in that it's more like watching GPT discover things. And when and where it fails, I tend to discover even more interesting things.

Where I believe this is beneficial (beyond my personal involvement and the gaining of understanding) is that we can stumble upon approaches not previously considered or documented. Few people are as naive as me, so I still get to claim that advantage. To date I've run into several of these stumblings I never thought possible, all while learning more about the processes involved. For instance, a renewed appreciation for Named Groups and Folders was certainly not on my want list. Automatically extracting models from inside of models based on their Groups? Never came to mind before. A plus is that this makes me think of Models and Choreographies as more similar to each other than I did before. What other interesting things can be discovered through further comparison and contrast?

What is probably the most intriguing thing to me is that someone who actually knows what they are doing can do so much more than I can. Where's Martin Hash when you need him! We are right at the cusp of a new age of Animation:Master!

But no (or should I say 'yes' in response to your post), we cannot compete with the magic that is out there. Most will likely bypass all the modeling, rigging and animating entirely: just generate the result directly (the end user need not be concerned with any of that other stuff). But alas, here we are, delving more deeply into what lies beyond the magic.
Admin Rodney Posted Thursday at 02:42 AM

I'm having GPT document some of this exploration thus far via 'field notes'. I can't vouch for the information in the PDF, nor do I know if the Python scripts trying to capture specific processes actually work... or even run. They surely aren't optimal. I just asked for the files to be generated as documentation.

Added: I see a copy of the PDF file is included in the zip file, so if you download the zip, downloading the PDF separately is not needed.

am_spline_patch_modeling_pack_2026-04-29.zip
Animation_Master_Spline_Patch_Modeling_Notes_2026-04-29.pdf