Increasing the number of selected submissions for the OOS scoring to two or three

Because the data is temporal and some values may change abruptly at future dates, I am repeating my request to make sure the organizers have seen it.

My request:

Please allow the selection of two or three submissions for the final round of the competition.

Given the temporal nature of the dataset, a single submission/idea may not give the best results.

If each account could pick two or three submissions, success in the competition would depend less on chance and more on the quality of the ideas.
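To illustrate with a quick toy simulation (the skill and noise numbers below are made up, and I am assuming each idea’s OOS score is an independent noisy draw around its true quality): picking the best of two or three ideas raises the expected final score and narrows its spread.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
true_skill = 0.02   # made-up average OOS score of a single idea
noise = 0.05        # made-up temporal noise around that average

for k in (1, 2, 3):  # number of selected final submissions
    # final outcome per trial: the best of k independently scored ideas
    scores = true_skill + noise * rng.standard_normal((n_trials, k))
    best = scores.max(axis=1)
    print(f"k={k}: mean {best.mean():+.4f}, std {best.std():.4f}")
```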

Hey there, bigfish! It seems that you’re interested in assessing an average of predictions from a group of models. Multi-submission would be equivalent to this, but with the drawback of more computational cost. As people had to build complex models with the computational cost limitation in mind, it would be unfair to modify this so late into the competition.

Please explore different ensemble techniques that are compatible with the computational resources provided.
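For example, a minimal averaging-ensemble sketch (the models and data here are placeholders, not the competition pipeline; any similarly lightweight models would do):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# placeholder data standing in for the competition features/targets
X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# two lightweight models, each cheap enough for a tight compute budget
models = [Ridge(alpha=1.0),
          GradientBoostingRegressor(n_estimators=50, random_state=0)]

# the ensemble is still a single submission: one averaged prediction vector
preds = [m.fit(X_tr, y_tr).predict(X_te) for m in models]
ensemble_pred = np.mean(preds, axis=0)
```

Averaging keeps inference cost roughly proportional to the number of base models, so the trade-off stays under your control.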

Hi,

I think there is a misunderstanding here. I asked for an increase in the number of submissions selected for the OOS scoring period. With only one submission, you might not get a good score, because the temporal nature of the data makes the predictions somewhat uncertain. With more than one submission, you can cover more possibilities, so winning the competition would not come down to mere chance.

Additionally, having two or three final submissions is the de facto standard in most ML competitions, such as the ones hosted on Kaggle.

“Multi-submission would be equivalent to assessing an average of predictions”

No, it wouldn’t :slight_smile: Multi-submission is equivalent to scoring the submission with the maximum OOS score, which no single ensemble submission can reproduce. So it is not a question of computational cost, but of evaluating not one, but two ideas independently.
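A quick toy check of the difference (my own sketch; I am assuming a Spearman-style OOS metric, which may not be the exact scorer): the score of an averaged submission is generally not the maximum of the individual scores.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
target = rng.standard_normal(200)

# two hypothetical submissions: one strong idea, one weaker idea
sub_a = target + 0.8 * rng.standard_normal(200)
sub_b = target + 2.0 * rng.standard_normal(200)

def score(pred):
    rho, _ = spearmanr(pred, target)  # assumed Spearman-style OOS metric
    return rho

avg_score = score((sub_a + sub_b) / 2)        # one ensemble submission
best_score = max(score(sub_a), score(sub_b))  # best of two selected submissions
print(f"ensemble {avg_score:.3f} vs best-of-two {best_score:.3f}")
```

Here averaging in the weaker idea drags the ensemble below the stronger idea on its own, so the two evaluation schemes are not interchangeable.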

I see, sorry, I had misunderstood your point, given the multi-submission discussions we had for the DataCrunch tournament.
I understand that the temporal nature of the dataset makes it a particularly difficult environment to perform inference in. However, the goal of this competition is to manage the bias-variance trade-off as any asset manager would: an asset manager cannot allocate assets in the past based on the overall historical performance of a model, and therefore cannot be scored with a look-ahead bias, which is what you are proposing here. These matters are of primary importance in the field of financial machine learning, and not necessarily relevant to Kaggle competitions.
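To see why selecting the best of several submissions after the OOS period is a look-ahead bias, consider this toy simulation (my own sketch with a made-up noise scale): even submissions with zero true skill earn a positive expected score once you keep the best of k.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100_000

# submissions with zero true skill: their OOS scores are pure noise
for k in (1, 2, 3):
    scores = 0.05 * rng.standard_normal((n_trials, k))  # made-up noise scale
    best = scores.max(axis=1)  # the score awarded if only the best of k counts
    print(f"k={k}: expected best-of-k score = {best.mean():+.4f}")
```

With k = 1 the expected score is zero, as it should be for skill-less submissions; with k = 2 or 3 it is strictly positive, purely from selection after the fact.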

There is so much randomness in the data :eyes: :rocket:

(This is just an observation, not a question; feel free to ignore it.)