Thursday, February 25, 2021

Hazelcast ITopic Interface

Introduction
We include the following line. Note that in Hazelcast 4 and later the interface lives in com.hazelcast.topic.ITopic.
import com.hazelcast.core.ITopic;
constructor
Example - Topic
We do the following
HazelcastInstance hzInstance = ...;
ITopic<StockPrice> topic = hzInstance.getTopic(topicName);
Example - Reliable Topic
We do the following
HazelcastInstance hz = ...;
ITopic<Long> topic = hz.getReliableTopic("sometopic");
addMessageListener method
Example
We do the following
import com.hazelcast.topic.Message;
import com.hazelcast.topic.MessageListener;

HazelcastInstance hz = ...;
ITopic<Long> topic = hz.getReliableTopic("sometopic");
topic.addMessageListener(new MessageListenerImpl());


class MessageListenerImpl implements MessageListener<Long> {
  @Override
  public void onMessage(Message<Long> m) {
    System.out.println("Received: " + m.getMessageObject());
  }
}
Example - Lambda
We do the following. Node 1 publishes, while Node 2 performs the subscription with a lambda.
// node #1
Hazelcast.newHazelcastInstance()
         .getTopic("topic")
         .publish(new Date());

// node #2
Hazelcast.newHazelcastInstance()
         .getTopic("topic");
         .addMessageListener(message -> /*Do something here*/);
publish method
The published object must be serializable, for example by implementing Serializable (see the sketch after the examples below).
Example - Object
We do the following
StockPrice price = ...;
topic.publish(price);
Example - long
We do the following
long messageId = ...;
topic.publish(messageId);
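A minimal sketch of the StockPrice payload used above; the fields are assumptions for illustration, the key point being that the class implements Serializable:
import java.io.Serializable;

// Implementing Serializable lets Hazelcast ship the object across the cluster
public class StockPrice implements Serializable {
  private String symbol;
  private double price;

  public StockPrice(String symbol, double price) {
    this.symbol = symbol;
    this.price = price;
  }
}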



Wednesday, February 24, 2021

Hazelcast Client HazelcastClient Class

Introduction
We include the following line.
import com.hazelcast.client.HazelcastClient;
newHazelcastClient method
Example
We do the following
// Start the Hazelcast Client and connect to an already running Hazelcast Cluster
// on 127.0.0.1

HazelcastInstance hz = HazelcastClient.newHazelcastClient();
newHazelcastClient method - ClientConfig
Example
We do the following
ClientConfig config = new ClientConfig();
...
HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
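A hedged sketch of what typically goes in place of the ellipsis; the cluster name and member address are example values:
ClientConfig config = new ClientConfig();
// Example values: these must match the running cluster
config.setClusterName("dev");
config.getNetworkConfig().addAddress("127.0.0.1:5701");
HazelcastInstance client = HazelcastClient.newHazelcastClient(config);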

Hazelcast Config Class

Introduction
We include the following line. Using this class, a new node that is part of the Hazelcast cluster is created. Since we join a cluster, the name is "In Memory Data Grid". A minimal node start-up is sketched after the list below.
import com.hazelcast.config.Config;
Other classes used together with this class are:
NetworkConfig : for network settings
MapConfig -> EvictionConfig, for TTL settings
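A minimal sketch, assuming the default configuration is acceptable:
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// An empty Config is enough; the node joins (or forms) the cluster with default discovery settings
Config config = new Config();
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);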
addMapConfig method
Example
We do the following. Here setTimeToLiveSeconds(-1) is called, but the value actually has to be between 0 and Integer.MAX_VALUE. Assigning 0 means there is no TTL, i.e. entries live forever. The name given to MapConfig with setName() is useful for accessing different caches by name when using Spring.
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;

@Bean
public Config hazelCastConfig(){
  Config config = new Config();
  config.setInstanceName("hazelcast-instance")
    .addMapConfig(
      new MapConfig()
        .setName("configuration")
        .setMaxSizeConfig(new MaxSizeConfig(200,
          MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setTimeToLiveSeconds(-1));
  return config;
}
Example
Suppose we have the following code
@Bean
public Config hazelCastConfig(){
  return new Config()
    .setInstanceName("hazelcast-instance")
    .addMapConfig(
      new MapConfig()
        .setName("regularly-changed-value-cache")
        .setMaxSizeConfig(new MaxSizeConfig(200,
          MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setTimeToLiveSeconds(0))
    .addMapConfig(
      new MapConfig()
        .setName("irregularly-changed-value-cache")
        .setMaxSizeConfig(new MaxSizeConfig(200,
          MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setTimeToLiveSeconds(0))
  ;
}
We do the following
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
@Service
@CacheConfig(cacheNames = "regularly-changed-value-cache")
public class LongStringServiceImpl implements LongStringService {
    
  @CacheEvict(allEntries = true)
  public void clearCache(){}

  @Override
  @Cacheable
  public String changingIrregularly() {
    return "...";
  }
}
getCPSubsystemConfig method
The explanation is as follows
Please note that Hazelcast IMDG implementation too falls under the AP category of the CAP system. However, strong consistency (even in failure/exceptional cases) is a fundamental requirement for any tasks that require distributed coordination. Hence, there are cases where the existing locks based on map implementation will fail. To address these issues, Hazelcast later came up with the CPSubsystem implementation. CPSubsystem has got a new distributed lock implementation on top of Raft consensus. The CPSubsystem lives alongside AP data structures of the Hazelcast IMDG cluster. CPSubsystem maintains linearizability in all cases, including client and server failures, network partitions, and prevent split-brain situations. In fact, Hazelcast claims that they are the one and only solution which offers a linearizable and distributed lock implementation. 
Example
We do the following
Config config = new Config();
CPSubsystemConfig cpSubsystemConfig = config.getCPSubsystemConfig();
cpSubsystemConfig.setCPMemberCount(3);
cpSubsystemConfig.setGroupSize(3);
HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance(config);
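A hedged sketch of the Raft-based lock the quote refers to; "my-lock" is an assumed name:
import com.hazelcast.cp.lock.FencedLock;

// FencedLock is the CP Subsystem's linearizable distributed lock
FencedLock lock = hazelcast.getCPSubsystem().getLock("my-lock");
lock.lock();
try {
  // critical section
} finally {
  lock.unlock();
}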
getMapConfigs method
Example
We do the following
Config config = new Config().setClusterName("Sample Hz Cluster");

EvictionConfig evictionConfig = new EvictionConfig()
  .setEvictionPolicy(EvictionPolicy.LRU)
  .setSize(...)
  .setMaxSizePolicy(MaxSizePolicy.PER_NODE);
MapConfig mapConfig = new MapConfig("...")
  .setEvictionConfig(evictionConfig)
  .setTimeToLiveSeconds(...)
  .setMaxIdleSeconds(...);
config.getMapConfigs().put("...", mapConfig);
setClusterName method
Example
We do the following
Config config = new Config().setClusterName("Sample Hz Cluster");

setInstanceName method
Example
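A minimal sketch, reusing the instance name from the addMapConfig example above; the name can later be used to look the instance up:
Config config = new Config().setInstanceName("hazelcast-instance");
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
// The same instance can be retrieved by its name
HazelcastInstance same = Hazelcast.getHazelcastInstanceByName("hazelcast-instance");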

setManagementCenterConfig method
Example
We do the following
import com.hazelcast.config.Config;
import com.hazelcast.config.ManagementCenterConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

@Bean
public Config hazelCastConfig() {
  return new Config().setManagementCenterConfig(
    new ManagementCenterConfig()
      .setEnabled(true)
      .setUrl("http://localhost:8080/hazelcast-mancenter"));
}

@Bean
public HazelcastInstance hazelcastInstance(Config hazelCastConfig) {
  return Hazelcast.newHazelcastInstance(hazelCastConfig);
}
setNetworkConfig method
I moved this to the NetworkConfig post.

setPartitionGroupConfig method
Example
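A hedged sketch; ZONE_AWARE is only one of the available group types and is an assumption here:
import com.hazelcast.config.PartitionGroupConfig;

// Keep partition backups on members in different zones instead of on the same machine
config.setPartitionGroupConfig(new PartitionGroupConfig()
  .setEnabled(true)
  .setGroupType(PartitionGroupConfig.MemberGroupType.ZONE_AWARE));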

setProperty method
Example
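A minimal sketch using a well-known Hazelcast property; the chosen value is an example:
// Route Hazelcast's own logging to slf4j
config.setProperty("hazelcast.logging.type", "slf4j");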

Hazelcast IMap Interface

Introduction
We include the following line. Note that in Hazelcast 4 and later the interface lives in com.hazelcast.map.IMap, as the quote below also shows.
import com.hazelcast.core.IMap;
The explanation is as follows
com.hazelcast.map.IMap extends java.util.Map. So there is a lesser learning curve here. The distributed map implementation has a method to lock a specific key. If the lock is not available, the current thread is blocked until the lock has been released. We can get a lock on the key even if it is not present in the map. If the key does not exist in the map, any thread apart from the lock owner will get blocked if it tries to put the locked key in the map.
Actually, the List interface can also be used just as in Java; a small sketch follows the quote below. The explanation is as follows
The data structures are standard ones like Map, List, or Queue (in Java there are just other implementations of the standard Java interfaces for java.util.List or java.util.Map).
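A minimal IList sketch with an assumed list name; the import shown is the Hazelcast 4 location:
import com.hazelcast.collection.IList;

// IList implements java.util.List, so the familiar API applies
IList<String> list = hazelcastInstance.getList("my-list");
list.add("first");
System.out.println(list.get(0));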

Usage

Example
We do the following. Unless we make a special configuration to use JSON with Spring, the Key and Value objects must be Serializable.
public class Doctor implements Serializable {
...
}

@Bean
public Map<String, Doctor> doctorMap(HazelcastInstance hazelcastInstance) {
  return hazelcastInstance.getMap("doctorMap");
}

@Autowired
private Map<String, Doctor> doctorMap;

@GetMapping(path = { "/get/{doctorNumber}" })
public Doctor getDoctor(@PathVariable("doctorNumber") String doctorNumber) {
  //First call checks whether doctorMap already has the doctor details;
  //if yes, return the value, otherwise call the database.
  Doctor doctor = doctorMap.get(doctorNumber);
  if (doctor == null){
    doctor = ...; 
  }
  return doctor;
}

@PostMapping("/add")
public void createDoctor(@RequestBody Doctor doctor) {
  //save doctor details in cache
  doctorMap.put(doctor.getDoctorNumber(), doctor);
  ...
}
@DeleteMapping(path = { "/delete/{doctorNumber}" })
public Doctor deleteDoctor(@PathVariable("doctorNumber") String doctorNumber) {
  //remove doctor details from both cache and database
  Doctor doctor = doctorMap.remove(doctorNumber);
  ...
  return doctor;
}
constructor
Returned by HazelcastInstance.getMap()
Example
We do the following
IMap<String, String> map = hazelcastInstance.getMap("my-map");
addEntryListener method
Example
We do the following
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;
import com.hazelcast.map.MapEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MapEntryListener implements EntryListener {

  private static final Logger logger = LoggerFactory.getLogger(MapEntryListener.class);

  @Override
  public void entryAdded(EntryEvent entryEvent) {
    logger.info("key {} and value {}",entryEvent.getKey(),entryEvent.getValue());
  }
  @Override
  public void entryEvicted(EntryEvent entryEvent) {
    logger.info("Map Entry was evicted : {}",entryEvent);
  }
  @Override
  public void entryRemoved(EntryEvent entryEvent) {
    logger.info("Object with key {} removed from map.",entryEvent.getKey());
  }
  @Override
  public void entryUpdated(EntryEvent entryEvent) {
    logger.info("key {} updated from {} to {}.", entryEvent.getKey(),
entryEvent.getOldValue(),entryEvent.getValue());
  }
  @Override
  public void mapCleared(MapEvent mapEvent) {
    logger.info("Map was cleared : {}",mapEvent);
  }
  @Override
  public void mapEvicted(MapEvent mapEvent) {
    logger.info("Map was evicted: {}",mapEvent);
  }
  @Override
  public void entryExpired(EntryEvent entryEvent) {
  }
}
IMap<String,String> hazelcastMap = ...;
hazelcastMap.addEntryListener(new MapEntryListener(), true); // true: include values in events
get method
Example
We do the following
public String getDataByKey(String key) {
  IMap<String, String> map = hazelcastInstance.getMap("my-map");
  return map.get(key);
}
lock method
Example
We do the following. The usual try/finally idiom is sketched after this example.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
IMap txLockMap = hazelcast.getMap("txLockMap");
String lock = "...";
txLockMap.lock(key);

try {
  txLockMap.tryLock(key,10,TimeUnit.SECONDS);
} catch (Exception e){
  ...
}

txLockMap.isLocked(key);

try {
  txLockMap.unlock(key);
} catch (Exception e){
  ...
}
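A hedged sketch of the usual idiom, unlocking in a finally block so the key is always released:
IMap<String, String> map = hazelcast.getMap("txLockMap");
String key = "...";
map.lock(key);
try {
  // work with the locked key
} finally {
  map.unlock(key);
}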
put method
Example
We do the following
public String createData(String key, String value) {
  IMap<String, String> map = hazelcastInstance.getMap("my-map");
  map.put(key, value);
  return "Data is stored.";
}
remove method
Example
We do the following
public String deleteData(String key) {
  IMap<String, String> map = hazelcastInstance.getMap("my-map");
  return map.remove(key);
}
set method
Unlike put(), set() does not return the previous value, so it is slightly cheaper when the old value is not needed.
Example
We do the following
public String update(String key, String value) {
  IMap<String, String> map = hazelcastInstance.getMap("my-map");
  map.set(key, value);
  return "Data is stored.";
}

Tuesday, February 23, 2021

MinIO API

Introduction
In the docker-compose.yml file we do the following
minio1:
  image: minio/minio:RELEASE.2020-08-27T05-16-20Z
  volumes:
    - data1-1:/data1
    - data1-2:/data2
  ports:
    - "9001:9000"
  environment:
    MINIO_ACCESS_KEY: minio
    MINIO_SECRET_KEY: minio123
  command: server http://minio{1...4}/data{1...2}
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

minio2:
  ...
minio3:
  ...
minio4:
  ...
## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:
The explanation is as follows
In this configuration, we are running MinIO in distributed mode. Basically, it can withstand multiple node failures and yet ensure full data protection because the drives are distributed across several nodes.

To run it in this mode, we need four disks according to the requirements. You can see that we named them minio1, minio2, minio3, minio4.

To start a distributed MinIO instance, we pass the drive locations as parameters to the minio server command. All nodes should have the same access key and secret key for the nodes to connect. Note that we have created login credentials MINIO_ACCESS_KEY: minio and MINIO_SECRET_KEY: minio123 . Feel free to change them as you wish.
MinioClient Class
constructor
We do the following
private MinioClient getMinioClient() {
  return MinioClient.builder()
    .endpoint("localhost", 9001, false)
    .credentials("minio", "minio123")
    .build();
}
getObject method
We do the following
private InputStream downloadOriginalBookAsStream(){
  InputStream stream;
  try {
    stream = getMinioClient().getObject(
      GetObjectArgs.builder()
        .bucket("original-ebook")
        .object("alice.epub")
        .build());
  } catch (InvalidKeyException | NoSuchAlgorithmException | ErrorResponseException | 
    InvalidResponseException | InvalidBucketNameException | ServerException | 
    XmlParserException | InsufficientDataException |
    InternalException | IOException e) {
    
    System.err.println(e.getMessage());
    throw new IllegalArgumentException("The original ebook file was not found");
  }
  return stream;
}
getPresignedObjectUrl method
We do the following
private String createURL(MinioClient minioClient, String filename) throws
  IOException, InvalidKeyException, InvalidResponseException, 
  InsufficientDataException, InvalidExpiresRangeException, ServerException, 
  InternalException, NoSuchAlgorithmException, XmlParserException, 
  InvalidBucketNameException, ErrorResponseException {
  
  return minioClient.getPresignedObjectUrl(
    GetPresignedObjectUrlArgs.builder()
      .method(Method.GET)
      .bucket("ebookcreator")
      .object(filename)
      .expiry(2, TimeUnit.HOURS)
      .build());
}
The explanation is as follows
This URL will be valid for two hours. You can modify the expiration date by using the expiry parameter.
uploadObject method
We do the following
private void handleFileUpload(String filename) {

  MinioClient minioClient = getMinioClient();
  try {
            
    ObjectWriteResponse response = createBucketAndUploadFile(minioClient, filename);
    if (response != null) {
      String url = createURL(minioClient, filename);
      System.out.println("Created url: " + url);
    }
  } catch (InvalidKeyException | NoSuchAlgorithmException | ErrorResponseException | 
      InvalidResponseException | InvalidBucketNameException |
      ServerException | RegionConflictException | InvalidExpiresRangeException | 
      XmlParserException | InsufficientDataException |
      InternalException | IOException e) {
      
    System.err.println(e.getMessage());
  }
}
private ObjectWriteResponse createBucketAndUploadFile(MinioClient minioClient,
    String filename) throws IOException, InvalidKeyException, InvalidResponseException,
    InsufficientDataException, NoSuchAlgorithmException, ServerException,
    InternalException, XmlParserException, InvalidBucketNameException,
    ErrorResponseException, RegionConflictException {

  if (!minioClient.bucketExists(BucketExistsArgs.builder()
      .bucket("ebookcreator").build())) {
    minioClient.makeBucket(MakeBucketArgs.builder().bucket("ebookcreator").build());
  }
  return minioClient.uploadObject(UploadObjectArgs.builder()
    .bucket("ebookcreator")
    .object(filename)
    .filename(filename)
    .contentType("application/epub")
    .build());
}




Monday, February 22, 2021

JPA @Inheritance + InheritanceType.JOINED

Introduction
This is the JPA inheritance strategy that developers use the most and understand most easily. Each class's data is kept in its own table. The parent and child classes are linked to each other with a foreign key, which is why a JOIN can be performed. In other words, the data is normalized.

The explanation is as follows
The tables are joined via foreign key constraints to the table of their superclass, which contains columns for the inherited properties. Polymorphic queries would use JOIN in the SQL statements using the foreign key columns.
Example
We do the following.
@Entity
@Table(name="person")
@Inheritance(strategy=InheritanceType.JOINED)
public class Person {

  @Id
  @GeneratedValue
  @Column(name="person_id")
  private int personId;

  @Column(name="name")
  private String name;
}
@Entity
@Table(name="employee")
@PrimaryKeyJoinColumn(name="person_id")
public class Employee extends Person {

  @Column(name="employee_id")
  private int employeeId;

  @Column(name="salary")
  private int salary;
}
@Entity
@Table(name = "manager")
@PrimaryKeyJoinColumn(name = "employee_id")
public class Manager extends Employee{
  private String branch;
}
The SQL generated to access a Manager object is as follows
SELECT
  manager0_.id as id1_4_0_,
  manager0_2_.name as name2_4_0_,
  manager0_1_.employee_id as employee1_1_0_,
  manager0_1_.salary as salary2_1_0_,
  manager0_.branch as branch1_2_0_ 
FROM
  manager manager0_ 
INNER JOIN
  employee manager0_1_ on manager0_.id=manager0_1_.id 
INNER JOIN
  person manager0_2_ on manager0_.id=manager0_2_.id 
WHERE
  manager0_.id=?
Example
We do the following
@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public abstract class Devices {
  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  private int id;
    
  @Column(name="brand")
  private String brand;

  @Column(name="name")
  private String name;
}
@Entity
@PrimaryKeyJoinColumn(name = "computerId")
public class Computer extends Devices {
  private String oS;
  ...
}

@Entity
@PrimaryKeyJoinColumn(name = "mobilephoneId")
public class MobilePhone extends Devices{
  private String color;
  ...
}
Example
If we wanted to fetch all Publications by the title field, the SQL would be as follows. To make it easier to read, I kept only the id column and deleted the other columns. Since two classes, Book and Magazine, inherit from Publication, two left outer joins can be seen in the SQL. The query side is sketched after the SQL.
select publicatio0_.id ...
from Publication publicatio0_
left outer join
    Book publicatio0_1_
    on publicatio0_.id=publicatio0_1_.id
left outer join
    Magazine publicatio0_2_
    on publicatio0_.id=publicatio0_2_.id
where publicatio0_.title=?
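A hedged sketch of the query side, assuming a Publication root entity with a title field (the entities themselves are not shown in this post):
// JPQL against the inheritance root; Hibernate joins the subclass tables as shown above
List<Publication> publications = entityManager
  .createQuery("SELECT p FROM Publication p WHERE p.title = :title", Publication.class)
  .setParameter("title", "...")
  .getResultList();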