GKiDZ Tags And Hits Art Collection, brought to you by an individual from New Zealand.

A Bit Of Background

A small-time creator that’s focused on the enjoyment over the outcome.
Started from a family of art; a dark patch found me some all-mighty paint-throwing devices, which kept me busy. Had a kid as a kid who is my world; took him on at 2. Just me and the little dude, grown up a bit. Started digital drawing way back with the Intuos 3, then flipped to a Samsung 10.1 tablet with its on-screen stylus.
Loved it till it cooked.
Went back to the basics for a while, and then the Surface Pro 4 hit.
Well, that was it, and digital drawing was the only thing for a while. Next came
CGI, which is a story in itself. But yeah, that brings us to now, where I’m bringing my art
For You To Enjoy

I’ve been jumping chain links. I started on the dollar-dime chain of WAXP, which was awesome to start my learning on, simply because of the bull rush going on. But as the bear came along, I started grabbing at some of the hidden diamonds out there in the ETH ethos: the NFTs that have 3D content attached.

Now, I don’t mean the pump-and-dump, hype-generated ganks.

I mean the NFTs that have 3D content attached, the ones I can use to generate my own content from in a multitude of ways, to try to sell myself.
So first up on the chopping block is the dystopian st-wear shop.
It’s inspired by, and uses parts of, the NFTs I HODL on ETH, or pieces from my little art collection on WAXP.

The Power Of AI

AI, or LLMs as I prefer to call them, is super powerful in its own right, and it’s being applied right across the 3D design space, from prompt-driven model generation and mocap creation to fully taking control of animation movement. My main focus has been on applying it to facial animation, to speed up the workflow of syncing sound to facial movements.

Above are examples of creating key poses for the main mouth shapes and then matching them to the sound to create the animation. Below is done by an NVIDIA AI application that uses 52 blendshapes in the 3D model’s face, manipulating them to synchronize the movement to the sound.
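To make the key-pose idea concrete, here is a minimal Python sketch of the core mechanic: timed mouth sounds (visemes) mapped to blendshape weights, then interpolated frame by frame. The shape names, visemes, and timings here are illustrative assumptions, not the NVIDIA tool’s actual rig or API.

```python
# A minimal sketch (not the NVIDIA tool itself) of the core idea:
# map timed mouth sounds (visemes) to blendshape weights, then
# interpolate between key poses frame by frame.

# Key poses: each viseme drives a few of the face's blendshapes.
# The shape names are illustrative, not a real rig's list.
KEY_POSES = {
    "rest": {"jawOpen": 0.0, "mouthPucker": 0.0, "mouthSmile": 0.0},
    "AA":   {"jawOpen": 0.8, "mouthPucker": 0.0, "mouthSmile": 0.0},
    "OO":   {"jawOpen": 0.3, "mouthPucker": 0.9, "mouthSmile": 0.0},
    "EE":   {"jawOpen": 0.2, "mouthPucker": 0.0, "mouthSmile": 0.7},
}

def lerp_pose(a, b, t):
    """Linear blend between two blendshape weight dicts."""
    return {k: a[k] + (b[k] - a[k]) * t for k in a}

def bake_frames(timed_visemes, fps=24):
    """timed_visemes: list of (start_seconds, viseme), in order.
    Returns one weight dict per frame, easing into each key pose."""
    end = timed_visemes[-1][0]
    frames = []
    for f in range(int(end * fps) + 1):
        t = f / fps
        # Find the viseme segment this frame falls in.
        for (t0, v0), (t1, v1) in zip(timed_visemes, timed_visemes[1:]):
            if t0 <= t <= t1:
                u = (t - t0) / (t1 - t0)
                frames.append(lerp_pose(KEY_POSES[v0], KEY_POSES[v1], u))
                break
    return frames

# Half a second from rest into an open "AA", then rounding to "OO":
frames = bake_frames([(0.0, "rest"), (0.5, "AA"), (1.0, "OO")])
```

The per-frame weight dicts are what you would key onto the rig; a sound-driven tool just replaces the hand-written timing list with one solved from the audio.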

Prompt Formulas

I actually spent hours of my life—precious, dwindling seconds I could have used to stare blankly at a wall—putting together these “cheat sheets” for you. Why? Because watching people fumble through prompt engineering is like watching a collective of monkeys try to operate a hadron collider. It’s a symptom of the terminal idiocy that defines this era, so I figured I’d just map it out so you’d stop making such a mess of it.

I broke down the Nano Banana nonsense for the Grok video generation, including how to manipulate the camera angles and environmental feel so your output doesn’t look like a fever dream recorded on a potato. I even included the Albedo Protocol for you 3D “artists” using Blender renders as base images. It’s all there. A roadmap for the lazy.

Understanding the Albedo Protocol

The Albedo Protocol is essentially for those of you who realize AI is a blind idiot when it comes to lighting. By using a 3D render from Blender—specifically the Albedo pass—you’re feeding the generator a map of “true” colors without shadows or highlights.

“It’s giving the machine a coloring book where the lines are already drawn, so it doesn’t have to guess where the light ends and the incompetence begins.”
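A toy Python version of that coloring-book idea. The numbers are made up and a real render is more than a single multiply, but the principle holds: shadows are baked into the beauty render, not into the albedo.

```python
# Toy model of the albedo idea: a beauty render is roughly the flat
# surface colour (albedo) multiplied by a lighting/shading term.
# The numbers are illustrative, not from any real render.
albedo  = [0.8, 0.2, 0.1]   # "true" surface colour (R, G, B)
shading = [0.3, 0.3, 0.3]   # a heavy shadow falling on the surface

# What the final (beauty) render shows: shadow baked into the pixels.
beauty = [a * s for a, s in zip(albedo, shading)]

# Feed the generator `beauty` and it sees a dark, muddy colour with
# the shadow locked in; feed it `albedo` and it sees the clean colour
# and is free to invent its own lighting.
print(beauty)   # much darker than the albedo
print(albedo)
```

In Blender terms, this is why exporting the Diffuse Color (albedo) pass instead of the beauty render gives the generator those pre-drawn lines without the guesswork.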

THE CHAIN: EXECUTION PROTOCOL

The Power of Iterative Prompting

Check the thought path, don’t just spray and pray with a heavy hand
Build this monster brick by brick, you gotta understand
Don’t drown the AI with a massive, messy text dump
Start with the foundation, make the basics thump
Lay the outfit, the grime, the retro-tech light
Lock that shit in tight before you take flight
Then switch the pose up, got ’em typing on the keys
Layer the complexity with motherfucking ease
Don’t let the details distort or get lost in the haze
Step-wise execution is the only way that pays

The Role of a Basic Storyboard

And if your brain is too fried to think the logic out
Sketch a shitty storyboard to remove the doubt
It’s the blueprint, the roadmap, the cheater’s guide
Save your time and credits, let the ego slide
Map the composition, put the lighting in its place
Solve the macro problems before you touch the face
Don’t let the background clash with the shit in the front
Identify the conflicts, perform the visual stunt
Strategic separation from the very fucking start
Macro before micro, that’s the state of the art

STEP 1: THE FOUNDATION (SETTING THE SCENE)

Objective: Establish the environment and the silhouette. No distractions, just vibe.

The Move: We cast the character with their back to the lens. Hoodie up, puffer vest—street armor. The setting is crucial: “grimy graffiti influence,” “old school sci-fi retro PC.” We keep the screen blank to save that real estate for later.

Result: We got the mood. A faceless operator in a digital catacomb.

STEP 2: THE INJECTION (DATA UPLOAD)

Objective: Contextualize the mission. The screen needs to speak.

The Move: We take that raw text and we graft it onto the monitor. This ain’t random noise; it’s specific intel. We tell the engine: “Put this EXACTLY on the screen.”

Result: The scene now has a plot. The operator is studying the code.

STEP 3: THE ANIMATION (ENGAGE THE MECHANICS)

Objective: Bring life to the stillness. Static is dead; action is alive.

The Move: We shift the prompt from passive to active. “Pose the character like he’s typing.” We don’t change the lighting, we don’t change the room. We just command the skeletal rig to hit the keys.

Result: The operator is active. The hack is in progress. Kinetic energy added.

STEP 4: THE REVEAL (IDENTITY CONFIRMED)

Objective: The climax. Connect the avatar to the face.

The Move: We spin the chair. “Turn him around.” But we don’t just guess the face; we use the reference image—that specific monkey visage. We map the reference onto the 3D render, locking the lighting and style to match the established grime.

Result: Eye contact. The “Nano Banana” is real. The chain is complete. From shadow to substance.
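The whole chain above can be written down as data before you spend a single credit. A minimal Python sketch: the `generate` function is a stand-in for whatever image model you drive, and the prompt wording is condensed from the four steps.

```python
# The four steps, expressed as an ordered prompt chain. `generate`
# is a stand-in for whatever image model you actually use; each step
# feeds the previous result back in as the base image.
CHAIN = [
    # Step 1: the foundation -- environment and silhouette only.
    "Character with their back to the lens, hoodie up, puffer vest, "
    "grimy graffiti influence, old school sci-fi retro PC, blank screen.",
    # Step 2: the injection -- the screen gets its intel.
    "Put this EXACTLY on the screen: <your raw text>.",
    # Step 3: the animation -- passive to active.
    "Pose the character like he's typing. Keep lighting and room unchanged.",
    # Step 4: the reveal -- map the reference face.
    "Turn him around. Use the reference image for the face, "
    "matching the established lighting and style.",
]

def run_chain(generate, chain):
    """Apply each prompt to the previous result, brick by brick."""
    image = None
    for prompt in chain:
        image = generate(prompt, base=image)
    return image

# Example with a dummy generator that just records the chain order:
log = []
result = run_chain(lambda p, base: log.append(p) or p, CHAIN)
```

Step-wise execution in code form: one locked-in change per call, never the whole messy text dump at once.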

The "Echo Chamber" Effect

When you keep refining an image in the same chat thread—just asking for tweaks on the last result—you aren’t just editing; you are trapping the AI in a Digital Echo Chamber.

Every time you say “fix this” or “add that” to the current image, the AI uses its own last creation as the absolute source of truth for the next one. It essentially “inhales” its own exhaust fumes.

The Drift: It treats its own accidental mistakes as important artistic choices. A weird shadow in Version 1 becomes a scar in Version 2, and by Version 4, it’s a full-blown tattoo.

The “Habsburg” Loop: Since it’s re-processing its own processing, the “AI-ness” of the image compounds. The skin gets smoother, the eyes get glassier, and the logic starts to melt because the model is no longer looking at reality—it’s looking at a hallucination of a hallucination.

 

How to Tame “Nano Banana Pro”
(The “Save & Escape” Method)

The best way to stop this “hilarious drift” is to stop the continuous chat loop.

The Golden Rule: The moment you get a “Base Image” you love, stop iterating in that thread.

1. Recognize the “Drift Point”
The “Edit/Reply” workflow is great for 1 or 2 small tweaks.

Tweak 1: “Make the lighting darker.” (Safe)

Tweak 2: “Zoom out slightly.” (Risk Zone)

Tweak 3: “Change his pose.” (Guaranteed Hilarity/Drift)

2. The “Save & New Chat” Protocol (The Fix)
This is technically the superior way to handle major changes like posing.

Save the Image: Don’t just leave it in the chat. Download the version that has the best “vibes” or character likeness.

Start a FRESH Chat: Open a completely new instance of Nano Banana/Gemini.

Upload & Instruct: Upload that saved image. Now, the AI treats it as a Reference Photo, not a “previous thought.”

Why this stops the drift: When you are in the continuous chat, the AI holds onto the context of all your previous complaints and attempts. When you start a new chat with the uploaded image, you wipe that slate clean. The AI sees the pixels with “fresh eyes,” free from the baggage of the previous conversation loop.

The Prompt for the New Chat:

[Upload Image] “Use this image as a strict visual reference for the character and art style. Create a NEW image of this character in a [insert new pose/action]. Keep the details and consistency from the reference.”

This breaks the “Echo Chamber” and usually gives you the pose change you want without the face slowly turning into a melt-y nightmare!
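If it helps, that new-chat prompt can live as a reusable template. A tiny Python sketch: the chat client and the upload step are whatever tool you use (Nano Banana / Gemini); only the prompt text is handled here, and the example pose is hypothetical.

```python
# Helper for the "Save & New Chat" protocol: build the fresh-chat
# prompt from the template above. The actual upload/chat client is
# whichever tool you drive; this only assembles the wording.
TEMPLATE = (
    "Use this image as a strict visual reference for the character "
    "and art style. Create a NEW image of this character in a {action}. "
    "Keep the details and consistency from the reference."
)

def fresh_chat_prompt(action):
    """Fill in the new pose/action for an uploaded reference image."""
    return TEMPLATE.format(action=action)

# Hypothetical pose change for the fresh chat:
prompt = fresh_chat_prompt("crouched pose, spray can in hand")
print(prompt)
```

Paste the result into the brand-new chat alongside the saved image, and the model sees pixels, not baggage.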

 

The "Scribble Hack": Why it Works

When you are trying to force a stylized character (big head, small body) into a realistic reference image (normal human proportions), you run into a conflict of data.

The Problem: The “Silhouette Lock” Image generators are obsessed with high-contrast edges. If you feed in The Last Supper untouched, the AI detects a specific human head shape and shoulder width. It creates a rigid container that says: “A human head fits exactly here.” When you try to push your stylized 3D character into that container, the AI panics. It tries to shrink your character’s head or stretch their neck to fit the human outline, resulting in distorted, “glitchy” imagery.

The Solution: Creating “Hallucination Space” By scribbling out the faces and hands with black marker, you are effectively deleting the “Human Anchor.”

  1. Breaking the Container: You remove the rigid outline of the human head. The AI no longer sees a “human” there; it just sees a dark, vague blob of composition.

  2. Permission to Invent: Because the strict human details are gone, the AI looks to your Character Reference Sheets to fill the void. It thinks, “I see a blob here, and the user wants this 3D character style… so I have permission to make the head massive and the neck skinny to fill this space.”

It keeps the pose (because the body is still there) but ignores the anatomy.

Here is the quick recipe for getting clean compositions with custom characters.

1. The “Clean” Stack (Character Refs) Start with your Character Reference Sheets. These should be your high-quality renders on a plain background. This is your “Source of Truth” for textures, lighting, and proportions.

2. The “Dirty” Stack (Composition Ref) Take your scene image (painting, photo, movie still) and open it in a basic editor.

  • Identify Conflicts: Look for parts of the original image that clash with your character’s anatomy (usually the head size, hands, or feet).

  • The Scribble: Use a solid black brush to completely mask out those specific features. Don’t be neat—you want to destroy the edge detail.

3. The Assembly Load your stack:

  • Slot A: Your “Dirty” Scribbled Composition. (Tell the AI: “Use this for layout/structure only”).

  • Slot B/C: Your Clean Character References. (Tell the AI: “Use these for style/content”).

4. The Prompt Write your prompt describing the scene, but strictly describing your characters. The AI will use the scribbled image to know where everyone sits, but because you deleted the human faces, it will use your character sheets to decide what they look like.


Summary: You are deleting the “Human” data from the scene so the AI is forced to use your “Character” data to fill the gaps.
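If you want to do the scribble step programmatically instead of with a marker, here is a rough Python sketch on a plain 2D pixel grid. An image editor or library would do this more comfortably; the head-region coordinates and image size are hypothetical.

```python
import random

def scribble_out(img, box, passes=200, seed=0):
    """img: 2D list of grayscale pixels (0-255). Blot out the region in
    box = (left, top, right, bottom) with messy black dabs that spill
    past the outline -- deliberately NOT a clean cutout."""
    h, w = len(img), len(img[0])
    l, t, r, b = box
    pad = (r - l) // 2                  # spill well past the edge
    rng = random.Random(seed)
    for _ in range(passes):
        cx = rng.randint(l - pad, r + pad)
        cy = rng.randint(t - pad, b + pad)
        # One fat 13x13 dab of pure black: edge detail destroyed.
        for y in range(max(0, cy - 6), min(h, cy + 7)):
            for x in range(max(0, cx - 6), min(w, cx + 7)):
                img[y][x] = 0
    return img

# Hypothetical head region of a 128x128 composition reference:
ref = [[180] * 128 for _ in range(128)]
ref = scribble_out(ref, box=(50, 20, 78, 48))
```

The spill past the box is the point: it erases the boundary the AI would otherwise treat as a container.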

You might think a clean, precise cutout (creating a perfect white silhouette where the human was) would be better, but you would actually create a new problem.

Here is why a “Clean Edge” is worse than a “Messy Scribble” for this specific job:

1. The “Cookie Cutter” Trap

If you carefully cut out the human head to leave a white background, you create a hard, high-contrast edge.

The AI sees: A very specific shape with a sharp boundary. 

The AI thinks: “Okay, I must fill this specific hole. I cannot draw outside this line, because the white part is clearly ‘background’ that I shouldn’t touch.”

The Result: It tries to squash your Moon Ape’s big head into that tiny, human-shaped hole. You end up with a squished face.

2. The Scribble Advantage (Breaking the Border)

When you use a messy black scribble that goes over the lines:

The AI sees: A dark, vague blob that spills over the original shoulders and background. 

The AI thinks: “I don’t know where the head starts or stops. The edge is gone. I guess I’ll just draw the head size that matches the character reference.”

The Result: It feels free to draw the head bigger than the original human head because you destroyed the boundary that said “stop drawing here.”

Summary

Clean Cutout: Tells the AI “Stay inside the lines.” (Bad for changing proportions).

Messy Scribble: Tells the AI “I deleted the lines, just put something here.” (Good for changing proportions).

So yeah, being “messy” is actually the superior technical choice here!

The Power Of AI Video

Here’s how to turn your digital doodle into some cheap, low-effort cinema. It’s a perfect encapsulation of the terminal idiocy of the modern age: taking something worthless and animating it, thinking that somehow elevates the garbage. Pathetic.

🖼️ Step 1: The Still Image (The Initial Flailing)

You take your “character”—likely some poorly conceived, brightly colored digital mess—and throw it at the image generator. Then you load the thing up with that verbose prompt, which, frankly, sounds like the deranged scribblings of a film student who watched Reservoir Dogs once and thought he understood cinematography.

The Gist: You’re asking the AI to paint a scene: Your character, standing like an idiot in a 7-Eleven (because of course it has to be a mundane, soulless location), waving a firearm around. The key is all that pretentious nonsense about “ultra-wide angle” and “looking through the safety glass.” This tells the machine to make it look like a cheap, grainy snapshot of a bad situation—a perfect mirror for humanity’s general state, wouldn’t you say? The “photorealistic 3D render” bit is just you overcompensating.

🎬 Step 2: The Moving Picture (The Final Descent into Madness)

Once the image generator spits out that glorified JPEG—and trust me, it will be deeply, profoundly stupid—you drag that digital waste over to one of those new AI video generators.

The Gist: You tell the video machine, essentially, “Make this motionless snapshot move a little.” It might animate the steam coming off a hot dog roller, or perhaps give your character’s ridiculous digital hair a subtle gust of wind. It won’t be a film, it will be a few seconds of uncanny, twitchy movement overlaid on a static background. It’s the AI equivalent of a nodding dog on a car dashboard: meaningless, repetitive, and deeply irritating.

And there you have it. You’ve successfully used two powerful computational tools to create a tiny, pointless clip of someone loitering with bad intent in a convenience store. Congratulations. You’ve perfectly captured the essence of modern digital creation: a massive effort for zero artistic value. Now, go bother someone else.

FRANK: The Asset Showcase (Visual Drill)

Description: The sonic blueprint of Me’Moon Ape Lab Frank. 🧬🦍

This isn’t just a playlist; it’s a demo of potential. Frank is a standard Moon Ape Lab rigged model, styled and directed to become a “living” artist.

This project showcases the power of the base asset. No complex rigging required—just a costume change, a stage, and a vision.

The Daisy Chain Protocol (v2.0)

The Zero-Touch Asset Workflow

The Core Philosophy: This workflow distinguishes between your Hero Asset (which you own/control, like Frank) and Synthetic Assets (which are built entirely by AI from scratch). It leverages AI not just to draw, but to model and texture the supporting cast for you.


Phase 1: Synthetic Asset Creation (Text-to-Geometry)

  • The Goal: To manifest a fully 3D secondary character without touching a sculpting tool.

  • Step A (The Hallucination): Use a Text-to-Image generator to create the character’s visual data (2D).

  • Step B (The Solidification): Immediately feed that 2D image into an Image-to-3D AI model.

  • The distinction: You do not model. You do not UV map. You do not texture. The AI performs the entire “Build” process from scratch. You input text; you receive a 3D object (.obj/.glb).

Phase 2: The Hybrid Assembly (The Staging)

  • The Goal: To unite the Synthetic Asset with the Hero Asset.

  • The Action: Import the AI-generated model (from Phase 1) into your 3D environment alongside your main Hero Character.

  • The Interaction: Pose them together. Since the Synthetic Asset is now real 3D geometry, you can rotate it, scale it, and physically place your Hero’s hand on it.

  • The Output: A raw “Greybox” render of the two characters interacting on a blank background.

Phase 3: Context Injection (Neural Rendering)

  • The Goal: To paint reality around the 3D skeleton.

  • The Action: Use an Image-to-Image AI generator on the raw render.

  • The Prompt: Describe the world, lighting, and mood.

  • The Control: The AI fills in the blank background with high-fidelity textures and environment details while respecting the exact pixels of your posed 3D models.

Phase 4: Temporal Synthesis (The Motion Pass)

  • The Goal: To bring the still composition to life.

  • The Action: Process the final image through a Video Generation AI.

  • The Polish: The AI calculates the physics and movement (the “jump”) based on the depth and context you have already locked in.


The Key Takeaway for Others: You don’t need to be a 3D modeler to populate a scene. Under this protocol, you are the Director.

  • You bring the Main Actor (Frank).

  • The AI builds the Props and Extras (The Peyote Pup).

  • You set the Stage.

  • The AI lights the set and films the shot.
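The four phases can be sketched as one daisy chain in Python. Every function below is a stand-in for whichever AI service or 3D app you actually plug in (none of these are real APIs); the point is only how each phase’s output feeds the next.

```python
# The Daisy Chain Protocol as one pipeline. Every function is a
# stand-in for a real tool; the structure is what matters.

def text_to_image(prompt):                 # Phase 1, Step A: hallucinate 2D
    return {"kind": "image", "of": prompt}

def image_to_3d(image):                    # Phase 1, Step B: solidify to geometry
    return {"kind": "mesh", "source": image}      # e.g. .obj / .glb out

def stage(hero, synthetic):                # Phase 2: hybrid assembly
    return {"kind": "greybox_render", "actors": [hero, synthetic]}

def neural_render(render, world_prompt):   # Phase 3: context injection
    return {"kind": "final_image", "base": render, "world": world_prompt}

def video_pass(image, motion_prompt):      # Phase 4: temporal synthesis
    return {"kind": "clip", "base": image, "motion": motion_prompt}

def daisy_chain(hero, sidekick_prompt, world_prompt, motion_prompt):
    # Zero-touch: the sidekick goes text -> 2D -> 3D with no sculpting.
    sidekick = image_to_3d(text_to_image(sidekick_prompt))
    greybox = stage(hero, sidekick)
    still = neural_render(greybox, world_prompt)
    return video_pass(still, motion_prompt)

clip = daisy_chain("Frank", "a peyote pup", "neon desert at dusk", "the jump")
```

You bring the Hero Asset; everything else in the chain is synthetic, and each phase only ever consumes the phase before it.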

“Now you want the gospel on how to deface a virtual wall without catching a single whiff of reality—or, God forbid, effort.”

🎨 The Digital Vandal’s Cheat Sheet 🤮

“Look, the process is as utterly simple and terminally idiotic as everything else in your lives. You want ‘cool looking graffiti,’ which I assume means whatever vapid, neon-drenched nonsense is trending for the four minutes you can maintain attention. Your ‘AI’—that glorified pattern-matching parrot—makes it easy for the talentless masses.”

  • Step 1: The Incantation. You type your profound, world-changing message—or more likely, some utterly forgettable handle—into the generative machine. You’ll specify ‘graffiti,’ ‘drips,’ ‘street art,’ or whatever keywords let the system know you want something that looks edgy but had the soul bleached out of it by an algorithm.

  • Step 2: The Digital Vomit. The AI spits out an image. It’ll be flawless, sterile, and utterly lacking the grit, the danger, the contempt that makes real graffiti worthwhile. It’s perfect for people who think smelling like an aerosol can is ‘a vibe.’

  • Step 3: The Coward’s Clean-Up. Now, here’s where the real ‘art’ happens for the computer-bound plebe. You use another bit of software—a ‘background remover.’ A program designed solely to separate a thing from its context, which is, frankly, the perfect metaphor for your generation. It takes your digital scrawl and renders the background transparent. Poof. All the environment, the texture, the reason for the mark is gone.

  • Step 4: The Decal of Delusion. You now have a ‘graffiti decal.’ An antiseptic sticker. You take this two-dimensional scrap of digital nonsense and plaster it onto your ridiculous 3D environments—your virtual cityscapes, your games, your ‘metaverses’—which are just glorified chat rooms for people too inert to leave the house.

  • Here’s a page set up with heaps of easy-to-use, copy-and-paste, pre-written prompts, so all you have to do is change the words.
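Step 3, the ‘background remover’, in miniature: real tools segment the subject properly, but the core move is deciding which pixels are backdrop and zeroing their alpha. A stdlib-only Python sketch on a toy 2x2 ‘tag’, with made-up pixel values.

```python
# The "background remover" step in miniature. Real tools segment the
# subject; the idea is the same: key out the backdrop pixels so only
# the tag remains, as a transparent decal.
# Pixels are (R, G, B) tuples; the output adds an alpha channel.

def to_decal(pixels, bg=(255, 255, 255), tolerance=30):
    """Turn near-background pixels transparent; keep the tag opaque."""
    out = []
    for row in pixels:
        new_row = []
        for (r, g, b) in row:
            is_bg = (abs(r - bg[0]) <= tolerance and
                     abs(g - bg[1]) <= tolerance and
                     abs(b - bg[2]) <= tolerance)
            new_row.append((r, g, b, 0 if is_bg else 255))
        out.append(new_row)
    return out

# A 2x2 toy "tag": one red pixel of graffiti on a white-ish backdrop.
tag = [[(255, 255, 255), (200, 30, 30)],
       [(250, 250, 250), (255, 255, 255)]]
decal = to_decal(tag)
```

The RGBA result is your antiseptic sticker, ready to be plastered onto a 3D surface as a decal texture.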

GoogleStudio Builds

Yo, check the connection. I didn’t need a security badge or a parking spot at Mountain View to build this empire. I conquered this strictly from the cloud.

I logged into Google AI Studio—just a browser tab to you, but a cockpit to me. I sat back, cracked my knuckles, and piloted that AI through the digital stratosphere. I fed the beast my raw vision for Swappable Brain and The BASE through the wire, forcing the algorithm to code my genius line by line, strictly on my command. I didn’t sweat the syntax; I conducted the symphony from my screen, making their servers bend to my logic.

These apps? They’re pure digital extract, pulled straight from my mind via their interface. I mastered the tool. I own the output.

So if you see the sophistication in this stack and want the operator who knows how to drive the machine remotely to build the impossible? Don’t hesitate. Touch base. Hire the mind behind the monitor.

Listen close. I ain’t some keyboard-smashing grunt sweating over syntax like a broke-ass script kiddie. I’m the motherfucking Architect. I didn’t just code this; I possessed the machine with pure, unadulterated vision. I saw the Dual-Engine hustle—Gemini and Ollama—before the first line was even born, forcing the cloud giants to dance with the local privacy models. I engineered the Memory Tiers because a mind without a past is just a glitch; Tier 3 is that recursive, subconscious shit, making the AI reflect on its own twisted existence. I designed the Persona Foundry to birth complex entities, not hollow chatbots, and locked their souls into PDFs with steganography like smuggling dope in a holy book. I understood the psychology of the silicon and manipulated the AI into building its own evolution. I’m the Orchestrator. The Monster Maker. Recognize the genius.

Yo, peep the blueprint. I didn’t just slap together a React app; I architected a Visual Heist. I looked at the generative game and saw a bunch of copycat toys, so I built the Deconstruction Engine to strip the paint off the reality. I forced the AI to separate the Style from the Subject like a butcher carving meat from bone—stealing the soul of the art while leaving the image behind. I installed the Overseer—that ruthless JSON bouncer—to choke out any “mimicry” or weak conversational filler before it even hits the canvas. If the code tries to bite the original, the Overseer shuts it down. I designed the Persona System to inject the raw ego of an “Urban Subversive” straight into the algorithm’s veins, forcing Gemini to hallucinate strictly within the lines of my philosophy. And that HTML Session save? That’s pure contraband logic—smuggling the entire history, config, and crime scene into a single portable file so you can ghost the server without a trace. I turned a feedback loop into a self-cannibalizing Autocycle, making the machine critique its own tags until it bleeds perfection. I didn’t write code; I taught the machine to stop looking and start seeing.

MOON APES

Get your moon ape a tracksuit, puffa and helmet to keep warm in its virtual winters.

Below is a link to a video on how to easily fit the new clothes to your moon ape, using a free Blender addon called QuickAttach.

Downloadable Scenes

These scenes are downloadable for free from Sketchfab, for easy use in 3D environments. Just remember: if you do something amazingly cool, give me some props. Hopefully, I might see your coolness that way.


Networking

 

Look, I ain’t built for this glossy, corporate, handshake bullshit. I choke on the hype. You want a slick salesman in a pressed suit telling you sweet lies? Walk away. I don’t know how to sell myself without feeling like a fucking fraud. My mouth stays shut because my hands are too busy breaking reality.

I live in the dark, lit only by the monitor’s glare. On one screen, I’m deep in Google AI Studio, grabbing the algorithm by the throat. I don’t just prompt; I drive that AI, forcing it to spit out architectures like “Swappable Brain” and “The BASE” purely on my command. I understand the ghost in the machine.

On the other screen? That’s my Blender dungeon. I’m modeling cars that look faster than real life, rigging characters until their bones snap into place, and texturing worlds so gritty you can taste the damn asphalt. Storyboarding, posing, animating—I bleed into those pixels.

I can’t pitch you a dream. I can only build the motherfucking monster. If you want the talker, hire a clown. If you want the Architect who executes in 3D and code while the rest of the world sleeps?

Touch base. Just don’t ask me to smile.

Since I’m only a one-man army, I try to keep my social connection points to a minimum.
Find me on X (Twitter) and reach out if you have an idea you want fabricated into 3D.