Tuesday, November 30, 2021

Kafka Streams API StreamsBuilder Class

Introduction
We include the following line
import org.apache.kafka.streams.StreamsBuilder;
This class creates a Topology object. The Topology object is needed to create the KafkaStreams instance.
The class also provides the stream() method, which creates a KStream that processes the events of a topic.
build method
Example
We do the following
Topology createTopology() {
  StreamsBuilder builder = new StreamsBuilder();
  // Add your streams here.
  TradeStream.build(builder);
  Topology topology = builder.build();
  System.out.println(topology.describe());
  return topology;
}
Adding a new stream looks like this
public class TradeStream {
  private final static String TRADE_TOPIC = "ARCTYPE.public.trade";

  public static void build(StreamsBuilder builder) {
    Serde<TradeModel> tradeModelSerde = SerdeFactory.createSerdeFor(TradeModel.class,
      true);
    Serde<String> idSerde = Serdes.serdeFrom(new IdSerializer(), new IdDeserializer());

    KStream<String, TradeModel> tradeModelKStream =
      builder.stream(TRADE_TOPIC, Consumed.with(idSerde, tradeModelSerde));

    tradeModelKStream.peek((key, value) -> {
      System.out.println(key.toString());
      System.out.println(value.toString());
    });
    tradeModelKStream.map((id, trade) -> {
      TradeModel tradeDoubled = new TradeModel();
      tradeDoubled.price = trade.price * 2;
      tradeDoubled.quantity = trade.quantity;
      tradeDoubled.ticker = trade.ticker;
      return new KeyValue<>(id, tradeDoubled);
    }).to("ARCTYPE.doubled-trades", Produced.with(idSerde, tradeModelSerde));
  }
}
The deserializer for the key SerDe looks like this. The serializer is written in a similar way
public class IdDeserializer implements Deserializer<String> {
  private ObjectMapper objectMapper = new ObjectMapper();

  @Override
  public void configure(Map<String, ?> props, boolean isKey) { }

  @Override
  public void close() { }

  @Override
  public String deserialize(String topic, byte[] bytes) {
    if (bytes == null)
      return null;

    String id;
    try {
      Map payload = objectMapper.readValue(new String(bytes), Map.class);
      id = String.valueOf(payload.get("id"));
    } catch (Exception e) {
      throw new SerializationException(e);
    }
    return id;
  }
}
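The serializer counterpart is not shown in the post; a minimal sketch of what the IdSerializer could look like is below (an assumption on my part: the key is already a plain string, so writing its UTF-8 bytes is enough)
import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.serialization.Serializer;

public class IdSerializer implements Serializer<String> {
  @Override
  public void configure(Map<String, ?> props, boolean isKey) { }

  @Override
  public byte[] serialize(String topic, String id) {
    if (id == null)
      return null;
    // The key is treated as a plain string, so UTF-8 bytes are sufficient (assumed)
    return id.getBytes(StandardCharsets.UTF_8);
  }

  @Override
  public void close() { }
}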
The SerdeFactory class for the value SerDe looks like this
public class SerdeFactory {
  public static <T> Serde<T> createSerdeFor(Class<T> clazz, boolean isKey) {
    Map<String, Object> serdeProps = new HashMap<>();
    serdeProps.put("Class", clazz);

    Serializer<T> ser = new JsonSerializer<>();
    ser.configure(serdeProps, isKey);

    Deserializer<T> de = new JsonDeserializer<>();
    de.configure(serdeProps, isKey);

    return Serdes.serdeFrom(ser, de);
  }
}
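The JsonSerializer used by SerdeFactory is not shown in the original either; a minimal Jackson-based sketch consistent with the JsonDeserializer below could be (this is an assumption, not the article's actual class)
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;

public class JsonSerializer<T> implements Serializer<T> {
  private final ObjectMapper objectMapper = new ObjectMapper();

  @Override
  public void configure(Map<String, ?> props, boolean isKey) { }

  @Override
  public byte[] serialize(String topic, T data) {
    if (data == null)
      return null;
    try {
      // Write the value object as plain JSON bytes
      return objectMapper.writeValueAsBytes(data);
    } catch (Exception e) {
      throw new SerializationException(e);
    }
  }

  @Override
  public void close() { }
}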
JsonDeserializer looks like this
public class JsonDeserializer<T> implements Deserializer<T> {
  private ObjectMapper objectMapper = new ObjectMapper();
  private Class<T> clazz;

  @Override
  public void configure(Map<String, ?> props, boolean isKey) {
    clazz = (Class<T>) props.get("Class");
  }
  @Override
  public void close() { }
  @Override
  public T deserialize(String topic, byte[] bytes) {
    if (bytes == null)
      return null;

    T data;
    Map payload;
    try {
      payload = objectMapper.readValue(new String(bytes), Map.class);
      // Debezium updates will contain a key "after" with the latest row contents.
      Map afterMap = (Map) payload.get("after");
      if (afterMap == null) {
        // Non-Debezium payloads
        data = objectMapper.readValue(objectMapper.writeValueAsBytes(payload), clazz);
      } else {
        // Incoming from Debezium
        data = objectMapper.readValue(objectMapper.writeValueAsBytes(afterMap), clazz);
      }

    } catch (Exception e) {
      throw new SerializationException(e);
    }
    return data;
  }
}
The value object looks like this
@JsonIgnoreProperties(ignoreUnknown = true)
public class TradeModel {
    public Integer id;
    public String ticker;
    public Integer price;
    public Integer quantity;
}
stream method
Example
We do the following
StreamsBuilder streamsBuilder = new StreamsBuilder();
KStream<String,String> ks0 = streamsBuilder.stream(IAppConfigs.ORDER_RECEIVED_TOPIC);
...
Topology topology = streamsBuilder.build();
Properties streamConfig = ...;
KafkaStreams kafkaStreams = new KafkaStreams(topology, streamConfig);
kafkaStreams.start();

Monday, November 29, 2021

Jakarta EE JBatch

Introduction
The description is as follows
According to the description from the Jakarta Batch (JBatch) official website, the JBatch specification provides the following:

The Jakarta Batch project describes the XML-based job specification language (JSL), Java programming model, and runtime environment for batch applications for the Java platform.

The specification ties together the Java API and the JSL (XML) allowing a job designer to compose a job in XML from Java application artifacts and conveniently parameterize them with values for an individual job. This structure promotes application reuse of artifacts across different jobs.

The specification allows the flexibility for batch jobs to be scheduled or orchestrated in any number of ways, and stops short of defining any APIs or constructs regarding scheduling or orchestration of multiple or repeated jobs.
The description is as follows. If we do not want to use a Jakarta EE container, Helidon can be used.
The usage of JBatch is really vast. Practically everywhere in the enterprise (EE) world there are millions of batch jobs running on millions of servers. The JBatch spec was created to make these types of tasks portable across the enterprise solutions in the Java/Jakarta EE world.

And yes, this is just a specification and not a complete implementation — every vendor has to provide its own implementation, but the specification itself is not standalone. It is very specific and depends heavily on other specs, like JTA and JPA, for example. This means if you want to run JBatch jobs, then you need an Enterprise Server that supports full EE spec.
AbstractBatchlet Class
Represents a single step
Example
We do the following
import javax.batch.api.AbstractBatchlet;
import javax.inject.Named;

@Named
public class MyBatchlet extends AbstractBatchlet {

  @Override
  public String process() {
    System.out.println("Running inside a batchlet");
   return "COMPLETED";
  }
}
AbstractItemReader Class
Example
We do the following
import javax.batch.api.chunk.AbstractItemReader;
import javax.inject.Named;

public class MyInputRecord {
  ...
}

@Named
public class MyItemReader extends AbstractItemReader {

  private final StringTokenizer tokens;

  public MyItemReader() {
    tokens = new StringTokenizer("1,2,3,4,5,6,7,8,9,10", ",");
  }

  @Override
  public MyInputRecord readItem() {
    if (tokens.hasMoreTokens()) {
      return new MyInputRecord(Integer.valueOf(tokens.nextToken()));
    }
    return null;
  }
}
ItemProcessor Interface
Example
We do the following
import javax.batch.api.chunk.ItemProcessor;
import javax.inject.Named;

public class MyOutputRecord {
  ...
}

@Named
public class MyItemProcessor implements ItemProcessor {

  @Override
  public MyOutputRecord processItem(Object t) {
    System.out.println("processItem: " + t);
    MyInputRecord record = (MyInputRecord) t;
    return (record.getId() % 2 == 0) ? null :
      new MyOutputRecord(record.getId() * 2);
  }
}
AbstractItemWriter Class
Example
We do the following
import javax.batch.api.chunk.AbstractItemWriter;
import javax.inject.Named;

@Named
public class MyItemWriter extends AbstractItemWriter {

  @Override
  public void writeItems(List list) {
    System.out.println("writeItems: " + list);
  }
}
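The post does not show how these artifacts are wired together and started. A minimal sketch, assuming a JSL file named META-INF/batch-jobs/myJob.xml that combines the reader, processor and writer into a chunk step
import java.util.Properties;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;

public class MyJobLauncher {

  public static void start() {
    // Assumes META-INF/batch-jobs/myJob.xml defines a chunk step referencing
    // myItemReader, myItemProcessor and myItemWriter (the @Named artifacts above)
    JobOperator jobOperator = BatchRuntime.getJobOperator();
    long executionId = jobOperator.start("myJob", new Properties());
    System.out.println("Started job execution: " + executionId);
  }
}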

Wednesday, November 24, 2021

Kafka Producer API

Serializer and Deserializer Interfaces
Example
Suppose we have the following code
import com.kafka.message.ExchangeProtoMessage.ProtMessage;
import org.apache.kafka.common.serialization.Serializer;

public class ProtMessageSerializer implements Serializer<ProtMessage>{
    @Override
    public byte[] serialize(String topic, ProtMessage data) {
        return data.toByteArray();
    }
}


import com.google.protobuf.InvalidProtocolBufferException;
import com.kafka.message.ExchangeProtoMessage.ProtMessage;
import org.apache.kafka.common.serialization.Deserializer;

public class ProtMessageDeserializer implements Deserializer<ProtMessage>{
    @Override
    public ProtMessage deserialize(String topic, byte[] data) {
        try {
            return ProtMessage.parseFrom(data);
        } catch (InvalidProtocolBufferException e) {
            throw new RuntimeException("exception while parsing", e);
        }
    }
}
On the producer side we do the following
import com.kafka.message.ExchangeProtoMessage.ProtMessage;
import com.kafka.model.ProtMessageSerializer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.IntegerSerializer;


public class MyKafkaProducerWithProtobufModel {

  public static void main(String[] args) {

   Properties props = new Properties();
   props.put("bootstrap.servers", "localhost:9092");
   props.put("linger.ms", 1);
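   // Note: the serializers passed to the KafkaProducer constructor below override these two properties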
   props.put("key.serializer","org.apache.kafka.common.serialization.StringSerializer");
   props.put("value.serializer","org.apache.kafka.common.serialization.StringSerializer");

   Producer<Integer, ProtMessage> producer = 
    new KafkaProducer<>(props, new IntegerSerializer(), new ProtMessageSerializer());
   for (int i = 1; i <= 10; i++){
     producer.send(new ProducerRecord<>("myFirstTopic", 0, i, 
      ProtMessage.newBuilder()
        .setId(i)
        .setName(i + "proto value")
        .build()));
   }
   producer.close();
  }
}
On the consumer side we do the following
import com.kafka.message.ExchangeProtoMessage.ProtMessage;
import com.kafka.model.ProtMessageDeserializer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.IntegerDeserializer;

public class MyKafkaConsumerWithProtobufModel {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");
    props.setProperty("group.id", "test");
    props.setProperty("enable.auto.commit", "true");
    props.setProperty("auto.commit.interval.ms", "1000");
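    // Note: the deserializers passed to the KafkaConsumer constructor below override these two properties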
    props.setProperty("key.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");
    props.setProperty("value.deserializer", 
      "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<Integer, ProtMessage> consumer = 
      new KafkaConsumer<>(props, new IntegerDeserializer(),
        new ProtMessageDeserializer());
    consumer.subscribe(Arrays.asList("myFirstTopic"));

    while (true) {
      ConsumerRecords<Integer, ProtMessage> records = consumer
        .poll(Duration.ofMillis(100));
      for (ConsumerRecord<Integer, ProtMessage> record : records) {
        System.out.println("Received message: (" + record.key() +  ", " + 
          record.value().toString() + ") at offset " + record.offset());
      }
    }
  }
}


ProducerRecord Class
We include the following line
import org.apache.kafka.clients.producer.ProducerRecord;
constructor
Example - topic + key + value
We do the following
KafkaProducer<Integer, String> producer;

private KafkaProducer<Integer, String> getProducer() {
  if (producer == null) {
    Properties producerProps = new Properties();
    producerProps.setProperty("bootstrap.servers", ...);
    producerProps.setProperty("key.serializer", 
      IntegerSerializer.class.getCanonicalName());
    producerProps.setProperty("value.serializer", 
      StringSerializer.class.getCanonicalName());
    producer = new KafkaProducer<>(producerProps);
  }
  return producer;
}

public Future<RecordMetadata> produce(String topic, Integer key, String value) {
  return getProducer().send(new ProducerRecord<>(topic, key, value));
}
constructor - topic + partition + timestamp + key + value
Example
We do the following
Future<RecordMetadata> produce(String topic, int partition, Long timestamp, 
  Integer key, String value) {
  return getProducer().send(new ProducerRecord<>(topic, 
    partition, timestamp, key, value));
}



RediSearch Class

Maven
We include the following dependency
<dependency>
  <groupId>com.redislabs</groupId>
  <artifactId>jredisearch</artifactId>
  <version>2.0.0</version>
</dependency>
createIndex method
We do the following
import io.redisearch.Schema;
import io.redisearch.client.Client;
import io.redisearch.client.IndexDefinition;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

JedisPool pool = ...;
Client redisearch = new Client("tweets-index", pool);

Schema sc = new Schema()
  .addTextField("id", 1.0)
  .addTextField("user", 1.0)
  .addTextField("text", 1.0)
  .addTextField("location", 1.0)
  .addTagField("hashtags");

IndexDefinition def = new IndexDefinition()
  .setPrefixes(new String[] { "tweet:" });

boolean indexCreated = redisearch
  .createIndex(sc, 
               Client.IndexOptions.defaultOptions().setDefinition(def));

redisearch.close();
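Before calling redisearch.close(), a document can be indexed and searched. A rough sketch with JRediSearch (the hash fields and the query string are made up for illustration; documents are plain Redis hashes matching the "tweet:" prefix defined above)
import java.util.HashMap;
import java.util.Map;

import io.redisearch.Query;
import io.redisearch.SearchResult;
import redis.clients.jedis.Jedis;

// Index a document as a plain hash; the "tweet:" prefix ties it to the index
try (Jedis jedis = pool.getResource()) {
  Map<String, String> fields = new HashMap<>();
  fields.put("id", "1");
  fields.put("user", "jane");
  fields.put("text", "hello redisearch");
  fields.put("location", "istanbul");
  fields.put("hashtags", "java");
  jedis.hset("tweet:1", fields);
}

// Full-text search over the indexed fields
SearchResult result = redisearch.search(new Query("hello"));
System.out.println("Total results: " + result.totalResults);
result.docs.forEach(doc -> System.out.println(doc));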



Sunday, November 14, 2021

Collectors.teeing Method - Feeds a Stream to Two Different Collectors

Introduction
Introduced in Java 12. It passes every element to two downstream collectors, takes their results, and combines them with a merge function. It therefore saves us from writing a custom Collector.

The description is as follows. In other words, teeing() is really a composite. The important point in all of this code is that the object we use as the final output must be able to receive two results
Returns a Collector that is a composite of two downstream collectors. Every element passed to the resulting collector is processed by both downstream collectors, then their results are merged using the specified merge function into the final result.

The resulting collector functions do the following:

* supplier: creates a result container that contains result containers obtained by calling each collector’s supplier

* accumulator: calls each collector’s accumulator with its result container and the input element

* combiner: calls each collector’s combiner with two result containers

* finisher: calls each collector’s finisher with its result container, then calls the supplied merger and returns its result.
Example
We do the following. Here the first collector computes the sum, the second collector counts the elements, and the final merge function takes both collectors' results and computes the average
double mean = Stream.of(1, 2, 3, 4, 5)
  .collect(Collectors.teeing(
    summingDouble(i -> i),
    counting(),
    (sum, n) -> sum / n));

System.out.println(mean);
Example
Suppose we have the following code. It represents our final output, so its constructor takes two parameters
public class PriceAndRows {

    private BigDecimal price;                             
    private final List<CartRow> rows = new ArrayList<>();  

    PriceAndRows(BigDecimal price, List<CartRow> rows) {
        this.price = price;
        this.rows.addAll(rows);
    }

    PriceAndRows() {
        this(BigDecimal.ZERO, new ArrayList<>());
    }
}
We do the following
public PriceAndRows getPriceAndRows(Cart cart) {
  return cart.getProducts()
      .entrySet()
      .stream()
      .map(CartRow::new)                           // 1
      .collect(Collectors.teeing(                  // 2
          Collectors.reducing(                     // 3
              BigDecimal.ZERO,                     // 4
              CartRow::getRowPrice,                // 5
              BigDecimal::add),                    // 6
          Collectors.toList(),                     // 7
          PriceAndRows::new                        // 8
      ));
}
The explanation is as follows
1. Map each Entry to a CartRow
2. Call the teeing() method
3. The first collector computes the price. It’s a simple reducing() call, with:
4. The starting element
5. A function to extract a Price from a CartRow
6. A BinaryOperator to add two prices together
7. The second collector aggregates the CartRow into a list
8. Finally, the last parameter creates a new object that aggregates the results from the first and the second collector
Example
We do the following.
Object[] array = list.stream()
    .collect(Collectors.teeing(
        Collectors.reducing(1, a -> (Integer)a[0], (a,b) -> a * b),
        Collectors.mapping(a -> (String)a[1], Collectors.joining()),
        (i,s) -> new Object[] { i, s}
    ));



Friday, November 12, 2021

java Command -X Options for Off-heap

Introduction
The description is as follows. The term "Direct Memory" is also sometimes used instead of off-heap.
Any memory the JVM uses besides the heap is considered off-heap. 
Off-heap memory itself covers two further areas worth distinguishing. These are
1. Direct Buffer
2. Native Memory allocations through JNI

Direct Memory vs Direct Buffer
 "Direct Memory" ve "Direct Buffer" farklı şeylerdir. Direct Buffer genelde low level I/O için kullanılır

The description is as follows
Even though the JVM does not track its direct byte buffer memory usage by default, we can measure these buffers’ sizes using a trick involving the DirectByteBuffer class. This class stores the location and size of a single direct byte buffer, so if we sum the size field of all DirectByteBuffer instances we’ll end up with our answer. There are some technicalities though. Firstly, some DirectByteBuffer instances may point at the physical memory owned by other instances (called “viewed” buffers). Secondly, DirectByteBuffer instances whose backing memory is likely already deallocated may still be around (“phantomed” or “phantom-reachable” buffers).

Luckily, there are some tools around to deal with these problems. For instance, IBM’s Eclipse Memory Analyzer extensions have a feature to calculate the size of non-viewed non-phantomed direct byte buffers.
Native Memory allocations through JNI

-X Options for Off-heap

-XX:MaxDirectMemorySize option
Specifies how much "native" (direct) memory the JVM may use. The description is as follows.
The internal JVM limit is set as follows:

By default, it’s equal to  -Xmx. Yes, the JVM heap and off-heap memory are two different memory areas, but by default, they have the same maximum size.

The limit can be changed using -XX:MaxDirectMemorySize  property. This property accepts acronyms like “g” or “G” for gigabytes, etc.
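A small sketch to see the limit in action (the class name and the 64m value are made up for illustration). Run it with -XX:MaxDirectMemorySize=64m and it eventually dies with "OutOfMemoryError: Direct buffer memory"
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectMemoryDemo {
  public static void main(String[] args) {
    List<ByteBuffer> buffers = new ArrayList<>();
    while (true) {
      // Each allocation comes from off-heap (direct) memory, not from the -Xmx heap
      buffers.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MB
      System.out.println("Allocated " + buffers.size() + " MB of direct memory");
    }
  }
}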
-XX:NativeMemoryTracking option
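Enables native memory tracking in the JVM. It is typically turned on with -XX:NativeMemoryTracking=summary (or detail), and the collected data can then be inspected with jcmd <pid> VM.native_memory summary.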

Sunday, November 7, 2021

Just In Time Compilation - JIT Intrinsics Optimization

Introduction
Many optimizations can be applied during Just In Time (JIT) compilation. One of them is the intrinsics optimization. Intrinsic literally means something internal or built in.

In the software world, an intrinsic function means a built-in function. For example, the bash shell also has many built-in commands and variables.

What Is the Intrinsics Optimization?
The description is as follows. In other words, instead of the bytecode on the JVM, faster assembly code can be used directly.
Depending on the architecture the JVM is running on, the bytecode may not even be used at all. The HotSpot JVM uses a concept called “intrinsics” which is a list of well-known methods that will be replaced with specific assembler instructions known to be fast. Good examples are the methods in java.lang.Math, System#arraycopy or Object#getClass (see @HotSpotIntrinsicCandidate).
Another description is as follows.
The JIT knows about intrinsics, so it can inline the relevant machine instruction into the code it's JITing, and optimize around it as part of a hot loop.
The intrinsic methods used by HotSpot are listed here

Example
The note in Math.sqrt reads as follows. The comment itself already says that the intrinsics optimization can be applied; in other words, Math.sqrt() is not as slow as one might think.
// Note that hardware sqrt instructions
// frequently can be directly used by JITs
// and should be much faster than doing
// Math.sqrt in software.
Example
Some classes in the JDK use jdk.internal.vm.annotation.IntrinsicCandidate in their code. It looks like this
@IntrinsicCandidate
public final long getAndAddLong(Object o, long offset, long delta) {
  long v;
  do {
    v = getLongVolatile(o, offset);
  } while (!weakCompareAndSetLong(o, offset, v, v + delta));
  return v;
}

Tuesday, November 2, 2021

Garbage Collector Patterns

Introduction
GC patterns are much easier to recognize when looked at graphically. They are:

1. Healthy saw-tooth pattern - Healthy
2. Heavy caching pattern - Acceptable
3. Acute memory leak pattern - Must be fixed
4. Consecutive Full GC pattern - May be fixable
5. Memory Leak Pattern - Must be fixed

1. Healthy saw-tooth pattern
In this pattern, heap usage drops back to its previous level after each GC. It looks like saw teeth. The graph looks like this
Another graph looks like this. When the heap is exhausted, almost all of the memory is freed.


2. Heavy caching pattern - Acceptable
Here there appear to be saw teeth as well, but heap usage never drops back to its lowest level. In other words, the application tries to keep too many things in memory. The graph looks like this

3. Acute memory leak pattern - Must be fixed
Heap usage grows continuously. The graph looks like this

Another graph looks like this. In this example the application eventually terminates with an OOM exception.


4. Consecutive Full GC pattern - May be fixable
When the application suddenly comes under load, it goes through consecutive GC cycles for a while. There is not much that can be done; distributing the load can be tried. The graph looks like this

5. Memory Leak Pattern - Must be fixed
An event occurs in the application and the leak starts after it. The graph looks like this


Just In Time Compilation - JIT C1 and C2 Compilers

Compiler Levels
The description is as follows
C1 compiler is responsible for levels 1, 2, and 3 compilations and optimizations. C2 compiler is responsible for level 4 compilation and optimization.
Old Usage
The description is as follows. In other words, in the old days a JDK shipped with either the C1 or only the C2 compiler.
During the early days of Java, there were two types of JIT compilers:
  • Client
  • Server

Based on what type of JIT compiler you want to use, appropriate JDKs have to be downloaded and installed. If you are building a desktop application, then JDK with a "client" JIT compiler needs to be downloaded. If you are building a server application, then JDK that has a "server" JIT compiler needs to be downloaded.

Client JIT compiler starts compiling the code as soon as the application starts. Server JIT compiler will observe the code execution for quite some time. Based on the execution knowledge it gains, it will start doing the JIT compilation. Even though server JIT compilation is slow, the code it produces will be far more superior and performant than the one produced by the client JIT compiler.

Today modern JDKs are shipped with both client and server JIT compilers. Both of the compilers try to optimize the application code. During the application startup time, code is compiled using the client JIT compiler. Later as more knowledge is gained, code is compiled using the server JIT compiler. This is called tiered compilation in JVM.

JDK developers were calling them client and server JIT compilers, internally as C1 and C2 compilers. Thus, the threads used by the client JIT compiler are called C1 compiler threads. Threads used by the server JIT compiler are called C2 compiler threads.
A description noting that the JDK now contains both compilers is as follows
Additionally, there are two types of compiler optimizations which are part of JDK — a client-side offering (-client), and a VM tuned for server applications (-server). Although the Server and the Client VMs are similar, the Server VM has been specially tuned to maximize peak operating speed. 

The Client VM compiler serves as an upgrade for both the Classic VM and the just-in-time (JIT) compilers used by previous versions of the JDK. The Client VM compiler does not try to execute many of the more complex optimizations performed by the compiler in the Server VM, but in exchange, it requires less time to analyze and compile a piece of code. 

The Server VM contains an advanced adaptive compiler that supports many of the same types of optimizations performed by optimizing C++ compilers, as well as some optimizations that cannot be done by traditional compilers.
Therefore
The -client option selects the C1 compiler, and not much optimization is done.
The -server option selects the C2 compiler, and more optimization is done.
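Compilation activity can be observed with the -XX:+PrintCompilation flag. A small sketch (the class name and the loop are made up for illustration)
// Run with: java -XX:+PrintCompilation JitDemo
// With tiered compilation, levels 1-3 in the output come from C1, level 4 from C2.
public class JitDemo {

  public static void main(String[] args) {
    long sum = 0;
    for (int i = 0; i < 1_000_000; i++) {
      sum += compute(i); // hot method; first compiled by C1, later recompiled by C2
    }
    System.out.println(sum);
  }

  static long compute(int i) {
    return (long) i * i;
  }
}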

Why Doesn't C2 Run Right Away?
The description is as follows. The goal is to observe the code over the long term and apply the best possible optimizations
The reason Java does not compile the code at start-up has to do with long-term performance optimisation. By observing the application run and analysing real-time methods invocations and class initialisations, Java compiles frequently called portions of code. It might even make some assumptions based on experience (this portion of code never gets called or this object is always a String).
64-bit HotSpot
Normally the -client option enables the C1 compiler. However, the 64-bit HotSpot JVM ignores this option; even if we use it, it has no effect. The description is as follows
A 64-bit capable JDK currently ignores this option and instead uses the Java Hotspot Server VM.
Normally the -server option enables the C2 compiler. However, the 64-bit HotSpot JVM enables it implicitly even if we do not pass the option. The description is as follows
-server

Select the Java HotSpot Server VM. On a 64-bit capable jdk only the Java HotSpot Server VM is supported so the -server option is implicit. This is subject to change in a future release..
So in the end, for the 64-bit HotSpot JVM there is no need to use either the -client or the -server option :)
Incidentally, maintaining the C2 compiler is said to be quite difficult. The description is as follows
Hotspot is getting old

The JVM traditionally uses the Hotspot JIT compiler, which is made of 2 compilers:
- C1 emits simple native code, but which is still faster than executing bytecode in an interpreter, and
- C2 is a more aggressive compiler that generates better native code based on execution profiles, but it may frequently de-optimize.

C2 is the compiler that gives performance, but it is and older, complex code base written in C++. Very few people on this planet have the ability to maintain it.