Apache Hadoop Ozone is a Hadoop subproject. It depends on the released Hadoop 3.2, but as Hadoop 3.2 is still very rare in production, older versions should also be supported so that Ozone can work together with Spark, Hive, HBase and older clusters.
We have two separate worlds: the client side and the server side. The server can have any kind of dependencies, as the classloaders of the server are usually separated from the client (different JVM, service, etc.). But the Ozone client might be used from an environment where a specific Hadoop instance is already available:
This is the happy scenario where we have the same Hadoop version both on server and client side.
The problem starts when the Ozone File system is used from an environment where the Hadoop version is different.
Let’s look at the classes used on the client side and the dependencies between them:
Hadoop classes are marked with orange, Ozone classes are white.
The problem is clearly visible here: there are multiple dependency paths to Configuration. When the same Hadoop version is used everywhere this is not a problem, but using different Hadoop versions makes it difficult:
Here the red and orange classes represent different Hadoop versions. Which version of Configuration (or any other shared Hadoop class) should be used? If we use the older version, the Ozone part can’t work (as it depends on newer features). If we use the newer version, it generates conflicts (as the classes are not always backward compatible).
Shading is the packaging of multiple jar files into one jar, moving all the class files into a single jar file. During the move it’s also possible to modify the bytecode and rename some of the packages (relocation). Usually this relocation is also called shading.
Unfortunately this doesn’t work in our case.
If we don’t shade Configuration, the OzoneConfiguration class can’t use the new 3.2 features from it. If we shade (package relocate) Configuration, the OzoneFileSystem will no longer implement the FileSystem interface, because the FileSystem class requires a method public void initialize(URI name, Configuration conf), but our specific OzoneFileSystem would provide initialize(URI name, org.apache.hadoop.ozone.shaded.Configuration conf), which is clearly not the same. OzoneFileSystem won’t be a FileSystem any more.
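The mismatch can be shown with a small compilable sketch (the class names below are simplified stand-ins for the real Hadoop and shaded classes, not actual Ozone code):

```java
import java.io.IOException;
import java.net.URI;

// Stand-ins: the real types are org.apache.hadoop.conf.Configuration and the
// relocated copy that shading would place under a different package name.
class HadoopConfiguration {}
class ShadedConfiguration {}

// Stand-in for org.apache.hadoop.fs.FileSystem: the contract is expressed in terms
// of the original, non-shaded Configuration type.
abstract class FileSystemContract {
  public abstract void initialize(URI name, HadoopConfiguration conf) throws IOException;
}

// A shaded OzoneFileSystem could only offer this signature, which does NOT implement
// the method above because the parameter type differs. Uncommenting
// "extends FileSystemContract" would therefore fail to compile.
class ShadedOzoneFileSystem /* extends FileSystemContract */ {
  public void initialize(URI name, ShadedConfiguration conf) throws IOException {
    // a real implementation would set up the Ozone client here
  }
}
```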
To solve this problem, Ozone started to use a specific classloader. With multiple classloaders you can have different versions of the same classes (this means different class definitions, not just different instances). The only question is the definition of the boundaries between the two classloaders:
With two different classloaders we can have two Configuration classes without any problem. The only dangerous area is using the two kinds of Configuration classes in the same place.
For example, if the OzoneClient returns a Configuration (loaded by the isolated classloader) or a Path (loaded by the isolated classloader), it generates very strange errors, as a Configuration (loaded by the isolated classloader) won’t be an instance of Configuration (loaded by the app classloader).
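The class-identity rule behind these errors can be demonstrated in a few lines (a sketch: the jar path is a placeholder and an older hadoop-common is assumed to be on the application classpath):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class ClassIdentityDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder path: any jar containing org.apache.hadoop.conf.Configuration.
    URL hadoopJar = Paths.get("/path/to/hadoop-common-3.2.jar").toUri().toURL();

    // parent = null, so this loader defines its own copy of the class.
    ClassLoader isolated = new URLClassLoader(new URL[] {hadoopJar}, null);

    Class<?> isolatedConf = isolated.loadClass("org.apache.hadoop.conf.Configuration");
    Class<?> appConf = Class.forName("org.apache.hadoop.conf.Configuration");

    // Same fully-qualified name, different defining classloaders:
    // the JVM treats them as two unrelated classes.
    System.out.println(isolatedConf == appConf);                // false
    System.out.println(appConf.isAssignableFrom(isolatedConf)); // false
  }
}
```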
Therefore we should follow the following rules:

* Use only a minimal, well-defined set of classes across the classloader boundary.
* Don’t expose classes loaded by the isolated classloader (such as Configuration or Path) through the OzoneClient.

To achieve the first rule, instead of refactoring the OzoneClient we introduced a new interface (called OzoneClientAdapter) containing only the minimal set of the required functions:
```java
public interface OzoneClientAdapter {

  InputStream readFile(String key) throws IOException;

  OzoneFSOutputStream createFile(String key, boolean overWrite,
      boolean recursive) throws IOException;

  void renameKey(String key, String newKeyName) throws IOException;

  boolean createDirectory(String keyName) throws IOException;

  boolean deleteObject(String keyName);

  Iterator<BasicKeyInfo> listKeys(String pathKey);

  //...
}
```
Using this adapter helps us to minimize the number of classes which cross the boundary.
Here the OzoneFileSystem (loaded by the app classloader) uses a reference to an implementation of an interface (OzoneClientAdapter, also loaded by the app classloader). We don’t need to know anything about the implementation, as we use it only via the interface, but under the hood the implementation is created by the specific (“isolated”) classloader.
Obviously, anything that appears in the OzoneClientAdapter interface should be loaded by the app classloader (and we need to have exactly one class definition for each of the used classes).
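A minimal sketch of how such an implementation can be created through a separate classloader (the factory and the implementation class name follow the naming used in this document but are simplified, and a plain URLClassLoader stands in for the filtering classloader described below):

```java
import java.net.URL;
import java.net.URLClassLoader;

public final class OzoneClientAdapterFactory {

  // The OzoneClientAdapter interface itself is loaded by the app classloader;
  // the implementation and its Hadoop 3.2 dependencies come from ozoneJars.
  public static OzoneClientAdapter createAdapter(URL[] ozoneJars) throws Exception {
    ClassLoader appClassLoader = OzoneClientAdapterFactory.class.getClassLoader();
    ClassLoader isolated = new URLClassLoader(ozoneJars, appClassLoader);

    // Class name as used in this document; the real factory code is more involved.
    Class<?> implClass =
        isolated.loadClass("org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl");

    // The instance is used only through the shared interface, even though its class
    // (and everything it references) was defined by the isolated classloader.
    return (OzoneClientAdapter) implClass.getDeclaredConstructor().newInstance();
  }
}
```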
NOTE
Classloader isolation works by ensuring that all dependent classes of the ClientAdapter implementation are transitively loaded with the isolated classloader. This happens because the ClientAdapter implementation itself was loaded by the custom classloader.
As we saw earlier, we need a separated classloader:

* to load the isolated Hadoop 3.2 classes (e.g. Configuration) independently from the app classloader, and
* to load the shared classes (e.g. Path or BasicKeyInfo, which are used in the methods of the OzoneClientAdapter interface) from the app classloader.

In Java it’s fairly easy to use a specific classloader, as java.net.URLClassLoader is very generic and usable. But in Java, if a class is loaded by the parent classloader, it won’t be loaded again by the child classloader.
For example, if you create a new ClassLoader isolated = new URLClassLoader(urls, appCl), it will always use a class loaded by the parent (appCl) classloader if the specific class is available from there. If you create a classloader with a parent and the parent has already loaded a class (like Configuration), the child will use the shared, already-loaded class definition.
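A short illustration of this parent-first delegation (a sketch; the jar path is a placeholder and an older Hadoop is assumed to be on the application classpath):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class ParentFirstDemo {
  public static void main(String[] args) throws Exception {
    ClassLoader appCl = ParentFirstDemo.class.getClassLoader();
    URL newHadoopJar = Paths.get("/path/to/hadoop-common-3.2.jar").toUri().toURL();

    // The child loader has its own copy of hadoop-common, but appCl is its parent.
    ClassLoader isolated = new URLClassLoader(new URL[] {newHadoopJar}, appCl);

    // Standard delegation asks the parent first, so the (older) Configuration already
    // available on the application classpath wins; the copy in the jar is ignored.
    Class<?> conf = isolated.loadClass("org.apache.hadoop.conf.Configuration");
    System.out.println(conf.getClassLoader() == appCl); // true: defined by the parent
  }
}
```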
But as we described earlier we need something different: some of the classes can be shared, some of the classes should be isolated:
Here the Hadoop 3.2 classes are isolated, but some of the key classes (OzoneClientAdapter or org.apache.hadoop.fs.Path) should be loaded only once and therefore can be used from both worlds.
To achieve this (some classes are shared, but not all of them), we started to use a specific classloader, FilteredClassLoader, which loads everything from the specific location first and only afterwards from the parent, except for some well-defined classes, which are used directly from the parent (app classloader).
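The idea behind it can be sketched roughly like this (not the actual FilteredClassLoader source; the shared-class list is only an example):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Set;

/**
 * Child-first classloader sketch: classes are resolved from its own URLs first,
 * except a small, well-defined set of shared classes which must come from the
 * application classloader so both "worlds" see the same class definitions.
 */
public class ChildFirstClassLoader extends URLClassLoader {

  // Example allowlist only; the real list contains more shared classes.
  private static final Set<String> SHARED = Set.of(
      "org.apache.hadoop.fs.Path",
      "org.apache.hadoop.fs.ozone.OzoneClientAdapter");

  private final ClassLoader appClassLoader;

  public ChildFirstClassLoader(URL[] urls, ClassLoader appClassLoader) {
    super(urls, null); // no parent: nothing is delegated upwards by default
    this.appClassLoader = appClassLoader;
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    if (name.startsWith("java.") || SHARED.contains(name)) {
      // Core classes and the explicitly shared ones come from the app classloader.
      return appClassLoader.loadClass(name);
    }
    synchronized (getClassLoadingLock(name)) {
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        try {
          c = findClass(name); // child-first: look at our own URLs first
        } catch (ClassNotFoundException e) {
          c = appClassLoader.loadClass(name); // fall back to the application classpath
        }
      }
      if (resolve) {
        resolveClass(c);
      }
      return c;
    }
  }
}
```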
With this approach we can support all the old Hadoop versions except when security is required.
Hadoop security is based on the famous UserGroupInformation class (aka UGI), which contains a field of type javax.security.auth.Subject.
At a high level, Subject works like a thread-local Map: you can put anything into it in your current thread-specific AccessControlContext and retrieve it later.
Hadoop adds one org.apache.hadoop.security.Credentials
instance to this thread-local map, which can contain tokens or other credentials.
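The mechanism can be reproduced with the plain javax.security.auth API plus Hadoop’s Credentials (a simplified sketch of roughly what UserGroupInformation does internally):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import org.apache.hadoop.security.Credentials;

public class SubjectDemo {
  public static void main(String[] args) {
    // Hadoop keeps one Credentials object (tokens, secret keys) among the
    // private credentials of the Subject.
    Subject subject = new Subject();
    subject.getPrivateCredentials().add(new Credentials());

    // Everything executed inside doAs() can find this Subject through the current
    // AccessControlContext, which is what makes it behave like a thread-local map.
    Subject.doAs(subject, (PrivilegedAction<Void>) () -> {
      Subject current = Subject.getSubject(AccessController.getContext());
      Credentials creds =
          current.getPrivateCredentials(Credentials.class).iterator().next();
      System.out.println("Tokens in the current context: " + creds.numberOfTokens());
      return null;
    });
  }
}
```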
Subject is a core Java class, so it’s easy to share between different classloaders (core Java classes are shared by default), but Credentials is not shared and has a lot of other dependencies (it depends on Token, which depends on the whole world…).
With Hadoop security we need to extract the current UGI information before calling the methods of OzoneClientAdapter, propagate the identification information in a safe way, and inject it back into the UGI of the isolated classloader so it can be used during the RPC calls.
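One possible shape of that propagation, sketched with Hadoop’s own Writable serialization (the helper below is hypothetical and serializes the whole Credentials object, i.e. the container of the tokens, rather than each Token separately):

```java
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public final class CredentialsPropagation {

  /** App classloader side: turn the current user's credentials into plain bytes. */
  public static byte[] serializeCurrentCredentials() throws IOException {
    DataOutputBuffer out = new DataOutputBuffer();
    UserGroupInformation.getCurrentUser().getCredentials().write(out);
    return Arrays.copyOf(out.getData(), out.getLength());
  }

  /** Isolated classloader side: rebuild the credentials and add them to its own UGI. */
  public static void injectCredentials(byte[] raw) throws IOException {
    DataInputBuffer in = new DataInputBuffer();
    in.reset(raw, raw.length);
    Credentials credentials = new Credentials();
    credentials.readFields(in);
    UserGroupInformation.getCurrentUser().addCredentials(credentials);
  }
}
```

Only the byte array crosses the classloader boundary, so no Hadoop security class has to be shared between the two worlds.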
NOTE
There will be two different versions of the Credentials and Token classes (and of their dependencies): one loaded by the default classloader and one by the isolated classloader. They are not seen as the same class by the Java runtime. When the Ozone client attempts to pass a Token created with the default classloader to the RPC client, it causes strange runtime errors.
For example:

* serialize the current Token (== TokenIdentifier) to a byte array (byte[]) and add it as a required parameter to all the methods of OzoneClientAdapter;
* deserialize the Token and TokenIdentifier inside the OzoneClientAdapterImpl (the implementation, loaded by the isolated classloader) and inject it back into that UGI.

There are three main types of authentication information which need to be propagated in this way:

* Kerberos tickets created by kinit. That should be easy, as it’s based on reading session files from /tmp/..., which are accessible from both UGIs (app and isolated).
* Tokens, which could be propagated as described above, but it’s not yet proven and can be tricky (especially with all the expiry and reissue logic).

With the isolated classloader we can support older Hadoop versions, but we have introduced significant complexity. This complexity will increase further when we implement the UGI information propagation (if it’s possible at all).
Let’s take one step back and try to achieve the same goal (supporting old Hadoop versions) in a different way (and thanks to Anu Engineer, who pushed to check this option again and again).
Let’s talk about the current project hierarchy (for the sake of simplicity only the HDDS projects are shown, but the same hierarchy is true for the Ozone projects):
Here the boxes represent Maven projects and the arrows represent project dependencies. Dependencies are transitive, therefore every project depends on Hadoop 3.2.
(Note: in fact we also depend on hdfs-server and hdfs-client, but those dependencies can be removed and don’t change the big picture. The orange box is the Hadoop dependency.)
What we need is something similar to Spark, where we have multiple clients built with different Hadoop versions:
But as the dependencies are transitive, this means that the hdds-common project should be 100% Hadoop-free:
But it also means that all of the Hadoop-dependent classes should be replicated somehow (which is reasonable, as according to our original problem they are not always backward compatible), and the clear interface between server and client will be based on the proto files. It’s the client’s responsibility to create the binary message on the client side (with Hadoop 2.7 or Hadoop 3.2). Hadoop RPC is backward compatible at the binary level.
This is only possible if the majority of the common classes are Hadoop independent and won’t be duplicated: only a reasonable amount of code will be cloned and maintained at multiple places.
Based on the early investigation this is the case, as the majority of hdds-common is Hadoop-free and it’s not impossible to fix the remaining problems:

* Move server-only code to the framework project (if it’s required only from the server side, like the certificate handling).
* Implement small helpers like string2bytes locally (instead of using the one from DfsUtils).
* Remove the generated protobuf classes from hdds-common. We should share only the proto files and generate the Java files in different ways, which means that some of the serialization/deserialization logic might be duplicated, as it should be removed from common.
* Use a Hadoop-independent metrics library (instead of the Hadoop metrics used in hdds-common). As of now micrometer seems to be the best choice, as it supports tags on specific metrics. (Note: the Hadoop metrics code could also be forked, but it has a huge number of dependencies.)
* Instead of using OzoneConfiguration everywhere, we will use a Configurable interface which can be implemented by OzoneConfiguration or by any other adapter which is compatible with older Hadoop versions (see the sketch below).

The final solution might require a bigger refactor. In the current phase I suggest cleaning up the current project: remove unnecessary dependencies and try to organize the code better. There are many small tasks which can be done (e.g. remove the dependency on hdfs-client, or move server-specific shared code out from the client side: move it from common to framework).
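To make the last list item more concrete, here is a hedged sketch of such a Configurable-style abstraction (the interface name and methods are illustrative, not an existing Ozone API):

```java
// Illustrative only: a minimal configuration abstraction that hdds-common could
// depend on instead of a concrete Hadoop Configuration class.
public interface ConfigurationSource {
  String get(String key, String defaultValue);
  int getInt(String key, int defaultValue);
}

// OzoneConfiguration (which extends org.apache.hadoop.conf.Configuration) could
// implement the interface directly on Hadoop 3.2, while an older Hadoop client can
// provide a thin adapter over its own Configuration, like this:
class HadoopConfigurationAdapter implements ConfigurationSource {
  private final org.apache.hadoop.conf.Configuration conf;

  HadoopConfigurationAdapter(org.apache.hadoop.conf.Configuration conf) {
    this.conf = conf;
  }

  @Override
  public String get(String key, String defaultValue) {
    return conf.get(key, defaultValue);
  }

  @Override
  public int getInt(String key, int defaultValue) {
    return conf.getInt(key, defaultValue);
  }
}
```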
If the cleanup tasks are done, we will clearly see how this second approach is possible. In the worst case we will end up with a cleaner project.