Description
50 sec of data trained with my pretrain for test
Comments
https://huggingface.co/Sztef/Teemo_Omega_Squad1/resolve/main/LilTjayBySztef.zip?download=true 50 sec of data trained with my pretrain for test 😛
Can you tell us what pretrain you used?
Some sort of studio session-based pretrain?
i can send it to u for testing but im still training it
ye its mainly for singers
Tell me all your details via DM then.
alr
how long is the dataset?
@sztefXplayWithRukiaSkibidi
for the pretrain
im making a pretrain myself made for experimental artists rn
gonna be huge and definitely revolutionary for people who want multiple tones and stuff
different mic quality and things
40h
but I still have 10 sessions to add
just on tjay or multiple artist?
multiple but pretrain its mainly for singers
its made from Ariana Grande, Dua Lipa, Charlie Puth, Joji, Freddie Mercury, Michael Jackson sessions, stems and multitracks
i did something like that before but i accidentally deleted it due to the hard drive clearing incident. Good to know i dont have to remake it lol
damn im making my own purely on extracted vocals, cleanest only tho. Yeat / Ken Carson / Playboi Carti / Kanye and stuff
rlly gonna be huge
plan on trying 2-3hrs of dataset first
then continue to make it if its good
😭
I really suggest you not use isolated vocals.
only cleanest, thats why im saying mines expiremental
it should work because what do we mainly use to train our own ai models?
extracted / cleaned datasets
might help who knows
You know if you use isolated vocals, you must remove harmonies and backing vocals, right?
ofc im not gonna use any of those
Also, i suggest using studio sessions instead
could i send you a piece of the dataset in dms on how clean it is?
just sent actually
oh yea and bro @sztefXplayWithRukiaSkibidi
did you fine tune yours or did you make it from scratch
the pretrain ?
finetuned
by ov2
finetuning a 40h dataset on a finetuned pretrain sounds like a recipe for overfitting lol. That's enough to make a full-on pretrain but no gpu resources i guess
:shrug:
so would you say making it from scratch is better?
because its basically merging right
because ur using the dataset from ov2 to combine with urs (my dataset)
to get a result ?
only if you have like more than 20 hours
10 hours also works if you're fine with training for a long time, but it gets better over time
so i am making a pretrain (finetuned) on isolated vocals
is that a good idea?
making sure it doesnt have any background adlibs or doubles ofc
and im using the latest method which is BS-RoFormer + removing reverb / echo / denoising (then normalizing and adding some things in audacity)
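For reference, a minimal sketch of the denoise + normalize tail of that chain, shelling out to ffmpeg. The BS-RoFormer separation and dereverb steps are assumed to already be done (e.g. in UVR); the folder names, noise-reduction amount and loudness target below are placeholders, not anyone's confirmed settings.

```python
# Batch-clean already-separated, dereverbed vocal clips:
# light FFT denoise (afftdn) + loudness normalization (loudnorm), mono 44.1 kHz.
import subprocess
from pathlib import Path

IN_DIR = Path("separated_vocals")   # assumed input folder of dry vocal clips
OUT_DIR = Path("cleaned_vocals")
OUT_DIR.mkdir(exist_ok=True)

for wav in IN_DIR.glob("*.wav"):
    subprocess.run([
        "ffmpeg", "-y", "-i", str(wav),
        # denoise, then normalize every clip to a consistent loudness target
        "-af", "afftdn=nr=12,loudnorm=I=-18:TP=-1.5:LRA=11",
        "-ar", "44100", "-ac", "1",   # voice-cloning datasets are usually mono
        str(OUT_DIR / wav.name),
    ], check=True)
```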
i could send a sample of the dataset
which ill do actually rq
i make one-off sub-pretrains using similar vocals just for a single model so it would probably work. As for whether it would sound good as a standalone pretrain lol
no instrumental phase inversion?
like no inst bleeds? or wdym
you use the official instrumental of the song and phase invert it with the original. It's a pretty well known and easy technique. Does AI Hub not have a guide for that? There's youtube tutorials if there isn't
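For reference, a minimal numpy/soundfile sketch of that phase-inversion trick: mixing in the inverted instrumental is the same as subtracting it from the full mix. The file names are placeholders, and it only cancels cleanly if both files come from the exact same sample-aligned master.

```python
import numpy as np
import soundfile as sf

mix, sr = sf.read("full_mix.wav")          # official mix with vocals
inst, sr2 = sf.read("instrumental.wav")    # official instrumental of the same master
assert sr == sr2 and mix.shape[1:] == inst.shape[1:], "rate/channels must match"

n = min(len(mix), len(inst))               # trim to the shorter file
vocal = mix[:n] - inst[:n]                 # adding the inverted instrumental == subtraction
vocal = np.clip(vocal, -1.0, 1.0)          # guard against clipping in the residual

sf.write("vocal_acapella.wav", vocal, sr)
```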
@sztefXplayWithRukiaSkibidi one last question
how long did it take per epoch gang ?
wit tha 40hrs 😭
guessing a range of 30 min - 1 hr per epoch?
or longer
On 4080 it would take around ~45 min per epoch
imma be using a google colab to train my pretrain
so would it be the same speed?
since i am going to be paying
for the premium
What gpus are there on Colab Pro?
Afair a P100?
With 16 gb and ~3500 cuda cores?
Not worth it, it's better to rent a gpu
A100
Look: I had to use only H100, A100, L40S, 4080, 4070 (Ti) on a 100 hr dataset:
H100 (batch 80) - 1 ep ~1.5 hr
A100 (batch 80) - 1 ep ~2 hrs
L40S (batch 48) - 1 ep ~2 hrs
RTX 4080 (batch 16) - 1 ep ~3 hrs
RTX 4070 Ti (batch 12) - 1 ep ~3.5 hrs
RTX 4070 (batch 12) - 1 ep ~3.8 hrs
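For reference, quick arithmetic on those per-epoch numbers for a hypothetical 300-epoch run (the epoch count is just an example, not from the chat):

```python
# Total wall-clock time per GPU, using the per-epoch figures quoted above.
hours_per_epoch = {
    "H100": 1.5, "A100": 2.0, "L40S": 2.0,
    "RTX 4080": 3.0, "RTX 4070 Ti": 3.5, "RTX 4070": 3.8,
}
epochs = 300  # assumed run length for illustration

for gpu, h in hours_per_epoch.items():
    total = h * epochs
    print(f"{gpu}: {total:.0f} h (~{total / 24:.1f} days)")
```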
:NodokaPeace:
a100 on paperspace is good
It's fine
:MemchoPleased:
But i prefer l40s more
It has way more cuda cores
for the price you get a month of it though, if you have more to spend then there's better options
T4 gpu, they have more types
kinda forgot since I’m not home rn
I think they got the 4080
Or an even better one honestly, hold up
Why you say that bro?
Isn’t google colab good?
you wont be able to train the entire pretrain. you have 100 units and it will probably take 10-15 per hour
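For reference, the back-of-the-envelope math behind that point, using the chat's rough 10-15 units/hour burn rate (not official Colab pricing) and the ~2 hr/epoch A100 figure quoted earlier:

```python
# How far 100 Colab compute units stretch at the estimated burn rates.
units = 100
hours_per_epoch = 2.0  # A100 figure from the chat above

for burn in (10, 15):
    gpu_hours = units / burn
    print(f"at {burn} units/hr: {gpu_hours:.1f} GPU hours "
          f"(~{gpu_hours / hours_per_epoch:.1f} epochs)")
```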
Samples
This model failed processing - generated samples are not available