
Why Is History Storing auc And val_auc With Incrementing Integers (auc_2, auc_4, ...)?

I am a beginner with Keras and today I bumped into this sort of issue I don't know how to handle. The values for auc and val_auc are being stored in history with the first even integers appended (auc_2, auc_4, ...).

Solution 1:

Solution 2:

In this line of code:

for train_index, valid_index in skf.split(np.zeros(n_sample), df[['target']]):

What is actually happening is that you are running multiple training instances, in principle five, since that is scikit-learn's default number of splits.
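For reference, here is a minimal, self-contained sketch of the splitter that loop relies on (assuming skf is a StratifiedKFold; the toy DataFrame just stands in for the question's df):

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import StratifiedKFold

    # toy stand-in for the question's DataFrame
    df = pd.DataFrame({'target': [0, 1] * 10})
    n_sample = len(df)

    skf = StratifiedKFold(n_splits=5)  # 5 splits is the scikit-learn default

    # split() yields one (train_index, valid_index) pair per fold,
    # so the loop body (and therefore model.fit) runs five times
    for fold_idx, (train_index, valid_index) in enumerate(
            skf.split(np.zeros(n_sample), df[['target']])):
        print(f"fold {fold_idx}: {len(train_index)} train / {len(valid_index)} valid")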

Although you get different training and validation sets in:

 x_train, y_train, x_valid, y_valid = get_train_valid_dataset(keyword, df, train_index, valid_index)

when you run model.fit(),

history = model.fit(
            x = x_train,
            y = y_train,
            validation_data = (x_valid, y_valid),
            epochs = epochs,
            callbacks=create_callbacks(keyword + '_' + model_name, SAVE_PATH, folder)
        )

you can see that the arguments passed to create_callbacks are static and do not change from one training instance to another: keyword, model_name, SAVE_PATH and folder remain constant across the five instances of your training.

Therefore, in TensorBoard, all the results are written to the same path.

You do not want that: you want each iteration to write its results to a different path.

You have to change the log_dir parameter and give it a unique identifier per fold. That way, each training iteration writes its results to a separate location, and the confusion disappears.
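As a sketch of what that could look like, assuming create_callbacks only needs to wrap a tf.keras.callbacks.TensorBoard callback (the extra fold_idx parameter is an illustrative addition; the other names are taken from the question's code):

    import os
    import tensorflow as tf

    def create_callbacks(run_name, save_path, folder, fold_idx):
        # Embed the fold index in the path so every training instance
        # gets its own TensorBoard log directory
        log_dir = os.path.join(save_path, folder, f"{run_name}_fold{fold_idx}")
        return [tf.keras.callbacks.TensorBoard(log_dir=log_dir)]

    # In the cross-validation loop, pass the fold index through:
    for fold_idx, (train_index, valid_index) in enumerate(
            skf.split(np.zeros(n_sample), df[['target']])):
        x_train, y_train, x_valid, y_valid = get_train_valid_dataset(
            keyword, df, train_index, valid_index)
        history = model.fit(
            x=x_train,
            y=y_train,
            validation_data=(x_valid, y_valid),
            epochs=epochs,
            callbacks=create_callbacks(
                keyword + '_' + model_name, SAVE_PATH, folder, fold_idx),
        )

Each fold then logs under its own ..._fold0, ..._fold1, ... directory, so TensorBoard no longer stacks the five runs on top of each other.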

Solution 3:

I solved the issue by switching to tensorflow==2.1.0. Hope it helps somebody else.
