
The impact of loss function on training result #970

@rzhli

Description

  • In the weather forecasting example you chose sum(abs2) as the loss function, but in Sebastian Callh's personal blog he uses Flux.mse, and the resulting losses differ by orders of magnitude. The forecasting result is also less satisfactory than the original. Is this because of the different loss functions? (See the first sketch after this list.)

  • The callback function always returns false. Can we set a different criterion for each feature, so that training terminates once the loss is small enough? (See the second sketch after this list.)

  • In the original example all raw data was pre-processed as a whole, while in this example you split it into train and test sets and then standardized each separately. This yields slightly different training data despite starting from the same dataset. How much impact does this have on training and the final test outcome? (See the third sketch after this list.)
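On the first question, sum(abs2, ŷ .- y) is the sum of squared errors, while Flux.mse divides by the number of elements, so the reported values differ by a factor of length(y) even though both optimize the same objective. The gradient is scaled by the same factor, so with a fixed learning rate the two setups can still train differently. A minimal sketch (the array shapes are made up):

```julia
using Flux

ŷ = rand(Float32, 6, 100)        # hypothetical predictions: 6 features × 100 steps
y = rand(Float32, 6, 100)        # hypothetical targets

sse = sum(abs2, ŷ .- y)          # loss in the weather forecasting example
mse = Flux.mse(ŷ, y)             # loss in the blog post

@assert sse ≈ mse * length(y)    # same objective, scaled by the element count
```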
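On the second question, the callback's return value is the early-stopping signal: sciml_train halts as soon as the callback returns true, so returning false just means "never stop early". A sketch of a per-feature stopping rule, assuming the loss function also returns the prediction so the callback receives it; truth_data and feature_tols are hypothetical names:

```julia
using Statistics

truth_data   = rand(Float32, 6, 100)   # stand-in for the standardized training targets
feature_tols = fill(0.05f0, 6)         # hypothetical tolerance for each weather feature

callback = function (p, loss, pred)
    # mean squared error of each feature (row) across all time steps
    per_feature = vec(mean(abs2, pred .- truth_data; dims = 2))
    all(per_feature .< feature_tols)   # `true` terminates training early
end
```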
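On the third question, a common middle ground between the two orders is to fit the statistics on the training split only and reuse them on the test split: both splits then end up on the same scale, and no test information leaks into preprocessing. A minimal sketch (shapes and split point are made up):

```julia
using Statistics

raw = rand(Float32, 6, 500)                   # stand-in for the raw series
train, test = raw[:, 1:400], raw[:, 401:end]  # split first

# Fit the statistics on the training split only, then reuse them on the
# test split so the test set does not influence preprocessing.
μ, σ = mean(train; dims = 2), std(train; dims = 2)
standardize(x) = (x .- μ) ./ σ

train_std = standardize(train)
test_std  = standardize(test)
```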

[Two images attached]

Labels: question (Further information is requested)