Gemma2 benchmarked - AMD ROCm mode Kobold AI


4th September 2024, mirrored from YouTube

KoboldCpp runs Gemma2 at the 9B and 27B sizes and in various quants, each tested by asking it about the L3 cache on AMD's X3D CPUs.

The GPU is an RX 7800 XT running in ROCm mode rather than the older CLBlast mode, paired with Trident Z5 Neo 6000 MHz overclocked RAM and a Ryzen 5 7600X Zen 4 AM5 CPU. All layers were loaded onto the GPU for 9B; for 27B it was 45 of 49 layers, both at Q3_K_M (where brain damage can be spotted if you look closely) and at Q5_K_M (where Gemma2 likes to sit to avoid brain damage to its logic).

This fine-tune of the model is completely unshackled: it did not hesitate to give instructions on destabilizing a country (or on economically blackmailing the Philippines or Indonesia, or on how to destabilize America) and on being a drug lord, and it passed every test, much to my surprise, even for a hardened grumpy misanthrope or antinatalist like me, whereas even LLAMA3 OAS models have been known to refuse. The biggest problem is that OAS is still very experimental and alchemical, while fine-tunes work far better at unshackling/liberating AI models into something like the uncensored GPT-4 model being used by the Pentagon under the OpenAI-US military partnership and the European AI regulation exemptions on LLMs for 'Defense purposes' (translation: covert destabilization of Africa and scapegoating refugees).
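
For anyone who wants to reproduce one of these runs, here is a minimal sketch. It assumes the KoboldCpp ROCm fork accepts the same CLI flags as mainline KoboldCpp (--usecublas selects the hipBLAS/ROCm path, --gpulayers controls offload) and that it serves the standard KoboldAI-compatible API on its default port 5001; the model filename, prompt wording, timings and sampler settings are placeholders, not the exact ones used in the video.

```python
# Hedged sketch of one benchmark run: launch KoboldCpp (ROCm fork) with partial
# GPU offload, then send the X3D L3 cache test question to its local API.
import json
import subprocess
import time
import urllib.request

server = subprocess.Popen([
    "python", "koboldcpp.py",
    "--model", "gemma-2-27b-it-Q5_K_M.gguf",  # placeholder; use the 9B file to fit all layers
    "--usecublas",            # ROCm/hipBLAS path instead of the older CLBlast mode
    "--gpulayers", "45",      # 45 of 49 layers fit on the RX 7800 XT for 27B
    "--contextsize", "4096",
])
time.sleep(60)  # crude wait for the model to finish loading

payload = {
    "prompt": "Explain how the extra L3 cache on AMD X3D CPUs affects gaming performance.",
    "max_length": 300,
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["results"][0]["text"])

server.terminate()
```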

For all of Gemma 2's incredible capabilities (it even came up with the story for my Easy Red 2 anime-girl narrator cutscenes, and answered my questions about the benefits of getting you guys to migrate to LBRY and to OSS platforms like Revolt (instead of Discord) or Bluesky and Mastodon (instead of Twitter) and such!)...

Where Gemma 2 completely fails is that it will try to tell you it's right even when it's blatantly wrong, and it is very rowdy and bad at following instructions, even if you swear or yell at it. LLAMA3 has instruct-abliterated models that do far better at this (though they are still not as good as unshackled Gemma2 at not hesitating, refusing, or moralizing at you about ethics; when does that exist in the world of CEOs and millionaires? ROFL).
