In the following notebook you can find a simple application of decision trees in Python (with sklearn).
I found a very clean data set on Kaggle, perfect for classification. It contains features of different animals (such as whether they have fur or feathers, whether they have a tail, whether they are airborne or aquatic...) and the goal is to classify which class (mammal, fish, bird etc.) each animal belongs to. After fitting the model, I also visualised the resulting tree and created a plot of the model's accuracy on the training and test sets as a function of the depth of the tree. The model performs very well on both training and test data and doesn't seem to suffer much from overfitting.
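For anyone who wants to try the same accuracy-vs-depth experiment before opening the notebook, here is a minimal sketch. Since the Kaggle file isn't attached to this post, it uses a synthetic stand-in for the animal features (the data generation and all parameter values are my assumptions, not taken from the notebook):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the zoo-style data: numeric features, multi-class label.
# (Placeholder for the actual Kaggle data set used in the notebook.)
X, y = make_classification(n_samples=400, n_features=16, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit one tree per depth and record train/test accuracy.
depths = range(1, 11)
train_acc, test_acc = [], []
for d in depths:
    clf = DecisionTreeClassifier(max_depth=d, random_state=0)
    clf.fit(X_train, y_train)
    train_acc.append(clf.score(X_train, y_train))
    test_acc.append(clf.score(X_test, y_test))
```

From there you can plot `train_acc` and `test_acc` against `depths` with matplotlib, and visualise any individual tree with `sklearn.tree.plot_tree`.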
Additionally, to compare the algorithm used by sklearn with the ID3 algorithm we saw in class, I used the code that Riccardo Cappi uploaded to this forum last week. As expected, this algorithm yields a different tree: ID3 maximises information gain (an entropy-based criterion) at each node, whereas sklearn's CART implementation minimises Gini impurity by default.
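I won't reproduce Riccardo's ID3 code here, but part of the criterion difference can be seen within sklearn itself, since `DecisionTreeClassifier` accepts `criterion="entropy"` (the information-gain criterion ID3 uses) as well as the default `"gini"`. A quick sketch on the iris data (my choice of data set, just for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Same data, two impurity criteria; the chosen splits can differ.
gini_tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
entropy_tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

# export_text prints the split rules so the two trees can be compared directly.
print(export_text(gini_tree))
print(export_text(entropy_tree))
```

Note that even with `criterion="entropy"` this is not the same as ID3: sklearn's CART always makes binary splits, while ID3 splits a categorical attribute into one branch per value, so the trees can still differ in shape.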
You can find the notebook here.