François Chollet @fchollet · Deep learning @google. Creator of Keras, neural networks library. Author of 'Deep Learning with Python'. Opinions are my own. · May 14, 2019

A Keras usage pattern that allows for maximum flexibility when defining arbitrary losses and metrics (that don't match the usual signature) is the "endpoint layer" pattern. It works like this:  https://colab.research.google.com/drive/1zzLcJ2A2qofIvv94YJ3axRknlA6cBSIw 

In short, you use `add_loss`/`add_metric` inside an "endpoint layer" that also has access to the model's targets. The layer then returns the inference-time predictions. You compile without an external `loss` argument, and you fit with a dictionary of data that contains the targets.
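For concreteness, here is a minimal sketch of such an endpoint layer, assuming tf.keras (TF 2.x); the class name `LogisticEndpoint` and the specific loss/metric choices are illustrative, not prescribed by the thread:

```python
import tensorflow as tf
from tensorflow import keras


class LogisticEndpoint(keras.layers.Layer):
    """Endpoint layer: computes loss/metrics internally, returns predictions."""

    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
        self.accuracy_fn = keras.metrics.BinaryAccuracy()

    def call(self, logits, targets=None, sample_weight=None):
        if targets is not None:
            # Training-time loss, attached to the layer via `add_loss`.
            self.add_loss(self.loss_fn(targets, logits, sample_weight))
            # Training-time metric, attached via `add_metric`.
            acc = self.accuracy_fn(targets, tf.nn.sigmoid(logits), sample_weight)
            self.add_metric(acc, name="accuracy")
        # The layer's output is the inference-time prediction.
        return tf.nn.sigmoid(logits)
```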

Of course logistic regression is a basic case that doesn't actually need this advanced pattern. But endpoint layers work every time, even when you have losses & metrics that don't match the usual `fn(y_true, y_pred, sample_weight)` signature that `compile` expects.
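As a rough usage sketch for that logistic-regression case (again assuming tf.keras; the input names `inputs`/`targets` are arbitrary and only need to match the keys of the data dictionary):

```python
import numpy as np

inputs = keras.Input(shape=(4,), name="inputs")
targets = keras.Input(shape=(1,), name="targets")
logits = keras.layers.Dense(1)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)

# The targets are a model input, so the loss lives entirely inside the layer.
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")  # note: no external `loss` argument

data = {
    "inputs": np.random.random((32, 4)),
    "targets": np.random.randint(0, 2, size=(32, 1)).astype("float32"),
}
model.fit(data, epochs=1)
```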

