Sklearn uses an optimized version of the CART (Classification and Regression Trees) algorithm to build decision trees, and it uses Gini impurity instead of entropy as the default splitting criterion. CART is closely related to the C4.5 and C5.0 methods: the key point is that all three follow the same overall approach and differ mainly in details such as the splitting criterion and how the splits are formed.
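To see this default in action, here is a minimal sketch (dataset choice and `random_state` are just illustrative assumptions) that fits two sklearn trees, one with the default Gini criterion and one with entropy, which is closer in spirit to ID3/C4.5:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# criterion="gini" is the sklearn default; "entropy" switches the
# split quality measure to information gain.
gini_tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
entropy_tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

# The two criteria often produce very similar trees on simple data.
print(gini_tree.get_depth(), entropy_tree.get_depth())
```

On easy datasets like iris the two criteria usually give nearly identical trees, which is consistent with the point above that the methods differ in details rather than substance.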
The C4.5 method, and its successor C5.0, are updated versions of the ID3 algorithm, and like CART they generate a decision tree for a given dataset by recursively splitting the records.
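At each recursive split, these algorithms choose the split that most reduces impurity on the resulting subsets. As a rough sketch of the two impurity measures mentioned above (a plain-Python illustration, not sklearn's internal implementation):

```python
from collections import Counter
from math import log2

def gini(labels):
    # Gini impurity: 1 - sum(p_i^2) over the class proportions p_i
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    # Shannon entropy: -sum(p_i * log2(p_i)) over the class proportions
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

labels = ["a", "a", "b", "b"]
print(gini(labels))     # 0.5  (maximum Gini impurity for 2 classes)
print(entropy(labels))  # 1.0  (maximum entropy for 2 classes)
```

Both measures are zero for a pure node and maximal for a 50/50 class mix, which is why swapping one for the other rarely changes the resulting tree much.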
Below you can find a paper in which the author compares ID3, C4.5 and C5.0. Most of the sections of this paper cover material that we have already seen in class.
https://stem.elearning.unipd.it/pluginfile.php/288962/mod_forum/post/4187/Comparative_ID3_C45.pdf