Lil Tjay


From the creator

"50 sec of data trained with my pretrain for test"


Introducing "Lil Tjay (RMVPE 95 EPOCHS)", a meticulously trained voice model powered by advanced AI technologies and crafted on Weights. This RVC model was built from a remarkably small dataset of just 50 seconds of audio, yet holds its own against models trained on far larger datasets. Perfect for creating distinct, intricate voiceovers in the booming AI music industry, this state-of-the-art RVC model beautifully recreates Lil Tjay's unique vocal tone and style. A significant stride in the AI music world, it is primed to push the boundaries of AI cover generation, adding a whole new dimension to text-to-speech technology. Dive into the world of AI covers and text-to-speech applications with our free, user-friendly AI tools. Embark on your journey into uncharted musical territory with Weights, where technology and innovation harmoniously unite.



This model failed processing - generated samples are not available


Can you tell us what pretrain you used?

Some sort of studio session-based pretrain?

i can send u it for testing but im still training it

ye its mainly for singers

Tell me all your details via DM then.

how long is the dataset?


for the pretrain

im making a pretrain myself made for experimental artists rn

gunna be huge and definitely revolutionary for people who want multiple tones and stuff

different mic quality and things

but I still have 10 sessions to add

just on tjay or multiple artist?

multiple but pretrain its mainly for singers

its made from Ariana Grande, Dua Lipa, Charlie Puth, Joji, Freddie Mercury, and Michael Jackson sessions, stems and multitracks

i did something like that before but i accidentally deleted it due to the hard drive clearing incident. Good to know i dont have to remake it lol

damn im making my own purely on extracted vocals, cleanest only tho. yeat / ken carson / playboi carti / Kanye and stuff

rlly gunna be huge

plan on trying 2-3hrs of dataset first

then continue to make it if its good


I really suggest you not use isolated vocals.

only cleanest, thats why im saying mines experimental

it should work because what do we mainly use to train our own ai models?

extracted / cleaned datasets

might help who knows

You know if you use isolated vocals you must remove harmonies and backing vocals right?

ofc im not gunna use any of those

Also instead i suggest using studio sessions

could i send you a piece of the dataset in dms on how clean it is?

just sent actually

oh yea and bro @sztefXplayWithRukiaSkibidi

did you finetune yours or did you make it from scratch

the pretrain ?


by ov2

finetuning a 40h dataset on a finetuned pretrain sounds like a recipe for overfitting lol. That's enough to make a full-on pretrain, but no gpu resources i guess


so would you say making it from scratch better?

because its basically merging right

because ur using the dataset from ov2 to combine with urs (my dataset)

to get a result ?

only if you have like more than 20 hours

10 hours also works, if you're fine with training for a long time. it gets better over time

so i am making a pretrain (finetuned) on isolated vocals

is that a good idea?

making sure it doesnt have any background adlibs or doubles ofc

and im using the latest method which is BS-Roformer + removing reverb / echo / denoising (then normalizing and adding some things in audacity)
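The cleaning chain described above ends in normalization. A minimal peak-normalize sketch in Python with numpy, assuming the audio is already loaded as a float array; the function name and the -1 dBFS target are placeholders, not anything the thread specifies:

```python
import numpy as np

def peak_normalize(audio: np.ndarray, target_db: float = -1.0) -> np.ndarray:
    """Scale audio so its loudest sample sits at target_db dBFS."""
    peak = np.max(np.abs(audio))
    if peak == 0:
        return audio  # pure silence: nothing to scale
    target_amp = 10 ** (target_db / 20)  # dBFS -> linear amplitude
    return audio * (target_amp / peak)

# example: a quiet sine wave brought up to -1 dBFS
t = np.linspace(0, 1, 44100, endpoint=False)
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)
loud = peak_normalize(quiet, target_db=-1.0)
print(round(20 * np.log10(np.max(np.abs(loud))), 2))  # ~ -1.0
```

In practice a DAW like Audacity does the same thing via its Normalize effect; the sketch just shows the arithmetic behind it.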

i could send a sample of the dataset

which ill do actually rq

i make one-off sub-pretrains using similar vocals just for a single model so it would probably work. As for if it would sound good as a standalone pretrain lol

no instrumental phase inversion?

like no inst bleeds? or wdym

you use the official instrumental of the song and phase invert it against the original. It's a pretty well known and easy technique. Does AI Hub not have a guide for that? There are youtube tutorials if there isn't
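The phase-inversion trick described above can be sketched in a few lines of numpy. It only works when the full mix and the official instrumental come from the same master and are perfectly sample-aligned; the function name and the toy signals are illustrative, not from the thread:

```python
import numpy as np

def invert_instrumental(mix: np.ndarray, instrumental: np.ndarray) -> np.ndarray:
    """Cancel the instrumental out of the full mix by polarity inversion.

    Both arrays must share the sample rate and be time-aligned,
    otherwise the cancellation smears instead of isolating vocals."""
    n = min(len(mix), len(instrumental))
    # flipping the polarity and summing is the same as subtracting
    return mix[:n] + (-1.0 * instrumental[:n])

# toy demo with synthetic signals
t = np.linspace(0, 1, 8000, endpoint=False)
vocals = 0.3 * np.sin(2 * np.pi * 220 * t)
inst = 0.5 * np.sin(2 * np.pi * 110 * t)
recovered = invert_instrumental(vocals + inst, inst)
print(np.allclose(recovered, vocals))  # True when perfectly aligned
```

Real releases are rarely this clean: a different master, limiter, or a one-sample offset leaves audible residue, which is why the technique is usually paired with manual alignment in a DAW.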

@sztefXplayWithRukiaSkibidi one last question

how long did it take per epoch gang ?

wit tha 40hrs 😭

guessing range 30min-1hr per epoch ?

or longer

On 4080 it would take around ~45 min per epoch

imma be using a google colab to train my pretrain

so would it be the same speed?

since i am going to be paying

for the premium

What gpus are there on Colab Pro?

Afair a P100?

With 16 gb and ~3500 cuda cores?

Not worth it, it's better to rent a gpu


Look: I had to use only H100 / A100 / L40S / 4080 / 4070 (Ti). At a 100 hr dataset:
H100 (batch 80): ~1.5 hr per epoch
A100 (batch 80): ~2 hr per epoch
L40S (batch 48): ~2 hr per epoch
RTX 4080 (batch 16): ~3 hr per epoch
RTX 4070 Ti (batch 12): ~3.5 hr per epoch
RTX 4070 (batch 12): ~3.8 hr per epoch
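The per-epoch figures above make total training time easy to budget. A quick sketch, where the target epoch count is a hypothetical example (the thread's own model ran 95 epochs, but on a far smaller dataset):

```python
# hours per epoch on a 100 hr dataset, from the figures above
hours_per_epoch = {
    "H100 (batch 80)": 1.5,
    "A100 (batch 80)": 2.0,
    "L40S (batch 48)": 2.0,
    "RTX 4080 (batch 16)": 3.0,
    "RTX 4070 Ti (batch 12)": 3.5,
    "RTX 4070 (batch 12)": 3.8,
}

epochs = 100  # hypothetical target, for illustration only
for gpu, hpe in hours_per_epoch.items():
    total = epochs * hpe
    print(f"{gpu}: ~{total:.0f} h total ({total / 24:.1f} days)")
```

Even on the fastest card listed, a full pretrain at this dataset size is measured in days, which is why the discussion below turns to renting GPUs.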


a100 on paperspace is good

It's fine


But i prefer l40s more

It has way more cuda

for the price you get a month of it though. if you have more to spend then there's better options

T4 gpu they have more types

kinda forgot since I’m not home rn

I think they got the 4080

Or an even better honestly hold up

Why you say that bro?

Isn’t google colab good?

you wont be able to train the entire pretrain. you have 100 compute units and it will probably take 10-15 per hour
